
Sunday, 17 March 2024

The mind is as real as the brain?

 Consciousness Observes Different Laws from Physics


Robert Lawrence Kuhn interviewed British philosopher and pastor Keith Ward on “What’s the Stuff of Mind and Brain?” Ward is an idealist philosopher who “believes that the material universe is an expression or creation of a Supreme Mind, namely that of God.” 

He explains how we can know that the mind is not simply what the brain does. One way is that the mind or consciousness functions according to different rules:

Kuhn: [5:53] Keith, what is it that we need to combine with the brain to make this non-material consciousness?

Ward: [6:04] Well, you need — what Buddhists would say is — thoughts and feelings and sensations and perceptions. And this is a stream of, believe it or not, consciousness. And that is something which is at least partly produced by the brain. It’s causally correlated with events in the brain, that is to say, but it also has its own psychical or spiritual or mental forms of causation.

So let me give you one example. [6:35] If I go through a mathematical calculation, I don’t know what’s happening in my brain at all. And I don’t believe that when I get a logically correct result and I say — amazingly, 2 plus two does equal 4 — I don’t believe that that is produced by purely physical laws in the brain. It is a logical calculation and there are laws of thought which produce it. So that’s what you need.

Kuhn: [6:57] So Keith, do you need something like a soul to combine with the brain to make consciousness?

Ward: [7:04] That’s a loaded word. I think the most important distinction I would make is between the laws of physics, which are mechanical in the sense they’re not directed, they’re not for the sake of anything, they’re just proceeding in accordance with mathematical equations … To contrast the laws of physics with the laws of thought, which you use in mathematical calculations for example, … you’ve got a criterion of correctness… the laws of mathematical and logical thinking are not reducible to or statable in terms of laws of physics or of any known science. So there must at least be two completely different ways of understanding what human beings are, a physical way and a way concerned with thinking — and I would say feeling and perception as well. And these you have to put these two together and I believe that nobody on Earth knows how to do that.

Ward is stressing that it is only in the intellectual world that concepts like correct vs. incorrect (or right vs. wrong) are meaningful. That’s a different world from the one created by physics. The unacknowledged difference between the two is one of the reasons materialist philosophies are not working out well in the study of consciousness.


Tuesday, 12 March 2024

The ego that cogitates is beyond the grasp of the physical sciences?

 “Lived Experience” Is Science’s Blind Spot


Seriously, last month we noted an article by University of Rochester astrophysicist Adam Frank at Big Think. There he protested the use of the term “hallucinate” to describe absurd chatbot glitches: “Its mistake is not a matter of making a false statement about the world because it doesn’t know anything about the world. There is no one in there to know anything about anything.”

In that short essay, he mentioned that he and two colleagues — Dartmouth College theoretical physicist Marcelo Gleiser and philosopher Evan Thompson — would publish a book this month, The Blind Spot: Why Science Cannot Ignore Human Experience, offering a bigger picture. Now that the book is out, they talk a bit more about it:

Cosmology tells us that we can know the Universe and its origin only from our inside position, not from the outside. We live within a causal bubble of information — the distance light traveled since the Big Bang — and we cannot know what lies outside. Quantum physics suggests that the nature of subatomic matter cannot be separated from our methods of questioning and investigating it. In biology, the origin and nature of life and sentience remain a mystery despite marvelous advances in genetics, molecular evolution, and developmental biology. Ultimately, we cannot forgo relying on our own experience of being alive when we seek to comprehend the phenomenon of life. Cognitive neuroscience drives the point home by indicating that we cannot fully fathom consciousness without experiencing it from within. 

ADAM FRANK, MARCELO GLEISER, AND EVAN THOMPSON, “THE ‘BLIND SPOT’ IN SCIENCE THAT’S FUELING A CRISIS OF MEANING,” BIG THINK, MARCH 7, 2024

The Heart of Science

What about the grand narratives of science? “At the heart of science lies something we do not see that makes science possible, just as the blind spot lies at the heart of our visual field and makes seeing possible.”

The tragedy the Blind Spot forces on us is the loss of what’s essential to human knowledge — our lived experience. The Universe and the scientist who seeks to know it become lifeless abstractions. Triumphalist science is actually humanless, even if it springs from our human experience of the world. This disconnection between science and experience, the essence of the Blind Spot, lies at the heart of the many challenges and dead ends science currently faces in thinking about matter, time, life, and the mind. 

FRANK, GLEISER AND THOMPSON, A CRISIS OF MEANING

What Gets Ignored

They are right about the dead ends. But is it true that the dead ends result merely from ignoring human experience? Surely, what’s ignored (or, more usually, denied or forbidden for discussion) is the immaterial nature of the human mind. Also off the table are questions like whether a cosmos where some beings (ourselves) clearly have immaterial intelligence can be created if an Intelligence does not underlie the universe. It’s quite likely that some fundamental questions cannot be answered within the allowed materialist framework.

But it’s interesting to see that these three thinkers are posing the questions — at least in this essay — in an open-ended way, almost as if they sense that dredging up pat materialist answers that don’t really work won’t help much.

Saturday, 24 February 2024

Determinism is theatre?

 Reply to Free Will Deniers: Show Me


Free will denial is a cornerstone of materialist–determinist ideology. We are, say the deniers, purely physical machines, meat robots.

Atheist-materialist evolutionary biologist Jerry Coyne is a prominent proponent of deterministic free will denial, and there are many others — philosopher Stephen Cave, biologist Robert Sapolsky, author Sam Harris, attorney Clarence Darrow, to name just a few.

From Harris:

How can we be “free” as conscious agents if everything that we consciously intend is caused by events in our brain that we do not intend and of which we are entirely unaware?… My choices matter — and there are paths towards making wiser ones — but I cannot choose what I choose. And if it ever appears that I do — for instance, after going back and forth between two options — I do not choose to choose what I choose. There is a regress here that always ends in darkness.

Free will deniers invariably acknowledge that we have the ineluctable sense of freely choosing, and that our belief in free will is a cornerstone of human psychology, of our social interaction, of our moral codes and of our judicial system. Nonetheless, deniers claim, we are deluded. We are not free at all — we are slaves to the laws of physics and chemistry that govern the physiology of our brains.

How Do We Know What Words Really Mean?

What to make of this bizarre viewpoint that we have no genuine freedom to choose — a viewpoint that is contrary to the lived experience of every human being? It is helpful to consider the question on a different level — not “Do we have free will?” but rather “What does it mean to believe we don’t have free will?”

What does it mean to believe anything? Philosopher Ludwig Wittgenstein (1889–1951) critiqued our conventional understanding of the “meaning” of words, and I think he sheds light on what both meaning and belief really are. He pointed out (in his middle work, most notably The Blue Book) that it is confused to say that the meaning of a word is assigned via an interior mental act or act of interpretation. Why do we attribute the meaning of a word to brain physiology, when we could just as plausibly attribute it to the physiology of the larynx, tongue or hand when we speak or write the word?

Meaning, according to Wittgenstein, is just the way the word is used in life. Meaning, in a sense, is use. It is common for a word to have several different meanings, depending on the context in which it is used.

Even the word believe itself has several meanings depending on use — “I believe it’s going to rain,” “I believe in you,” “I believe that I will have a ham sandwich,” etc. The difference in the meaning of believe in these instances is in the context of use — what we mean by believe is determined by the context (the gestalt) in which we use the word. To believe something is to behave in a certain way.

Belief is behavior. The belief-behavior can include speaking or writing the belief, of course, but belief is behavior in a much broader sense than merely speaking. Belief is what you do, not merely what you say. Consider the statement by a serial adulterer: “I believe in fidelity and chastity.” Of course, such a claim is not credible, because his behavior makes a mockery of that belief. Serial adulterers believe in serial adultery (otherwise, they wouldn’t do it), just as embezzlers believe in embezzlement and philanthropists believe in philanthropy. Belief is much more than words — it is, to use Wittgenstein’s phrase, a form of life. Belief is a way of living.

So, do free will deniers really believe that free will isn’t real? Of course not. Free will deniers live as if free will is real, despite their proclamations and their blog posts. What matters is what they do, not merely what they say. Every human being lives life as if free will is real. We all believe — as demonstrated by our behavior — in the fact that we choose some options and not others, that we have real moral accountability, that there is such a thing as justice. No one (outside of a mental hospital) really believes that we are meat robots without free will.

If you want to know what a free will denier really believes, steal his laptop or dent his fender and see if he holds you morally accountable.

So What’s Free Will Denial Really All About?

So what are free will deniers really doing when they say that they don’t believe in free will, but never act like free will isn’t real? Free will denial is determinist signaling, in which materialists flaunt their bona fides. It is analogous to a political yard sign or a cross worn around the neck.

It’s a way of announcing to the world who you are — whether or not you really believe (i.e., behave in accordance with) your politics or your faith. The difference between a political belief expressed on a sign or faith expressed via a pendant and free will denial is that sometimes the sign or the cross does correspond to a way of life, and thus is a real expression of belief. Free will denial, on the other hand, never constitutes genuine belief, because it is not possible to live as if free will isn’t real.

Free Will Denial as Performance Art

Materialists don’t really mean it because they never do it. To truly believe that free will isn’t real — to believe that our actions are wholly determined by our brain chemicals, for which we have no moral responsibility whatsoever — is to utterly abandon any real sense of morality, to deny not only the salience but even the meaning of right and wrong behavior. It means to live every moment as if you and all people on earth are meat robots, utterly devoid of choice or free agency. A person who really believed that free will isn’t real wouldn’t hold a murderer morally responsible for murder, any more than the gun or the bullet is. If you carelessly dent a genuine free will denier’s car in a parking lot, he wouldn’t hold you responsible any more than he’d hold your car responsible.

So the next time a LARPing materialist declares to you that he doesn’t believe in free will, say this: “Your free will denial is performance art. What you do is immeasurably louder than what you say. You don’t really believe that free will isn’t real, unless you live like it isn’t real.”

Sunday, 18 February 2024

The mind contemplates itself?

 Consciousness, a Hall of Mirrors, Baffles Scientists


To contemplate consciousness is, as professor of religion Greg Peterson put it, like looking into and out of a window at the same time. No surprise then that philosophers of science call it the Hard Problem of Consciousness. The inexorable progress of brain imaging was supposed to dissolve the conundrum but we spoil no surprise by saying that new information and insights only deepened it.

Among the many quests, one has been to discover the seat of consciousness. An image rises unprompted. Seat? Does consciousness have a seat at the table? Wait a minute. Isn’t consciousness the table? You see the difficulty, of course. At any rate, the search is for the specific bit of the brain that spews out the unthinking electrical charges that create consciousness.

It’s been a long and winding road. Brain imaging has not turned out to be a road map of the mind. For example, functional MRI imaging only tells researchers where blood is traveling in the brain. The problem is, as a Duke University research group pointed out, “the level of activity for any given person probably won’t be the same twice, and a measure that changes every time it is collected cannot be applied to predict anyone’s future mental health or behavior.”

Rise and Fall of the Lizard Brain

The most widely popularized theory of mind — the triune brain theory — depends on organization rather than imaging. Originally developed by Yale University physiologist and psychiatrist Paul D. MacLean (1913–2007) decades ago and promoted by celebrity skeptic Carl Sagan (1934–1996), it divides the brain into three parts. The reptilian brain controls things like movement and breathing, the mammalian brain controls emotion, and the human cerebral cortex controls language and reasoning.

This approach resulted in immensely reassuring ideas; for example, a widely disliked boss or politician morphed into a “dinosaur brain.” In 2021, Jeff Hawkins, inventor of the PalmPilot (a smartphone predecessor), even claimed to have figured out how human intelligence works, relying on his model of the mammalian brain.

The human brain was bound to disappoint pop culture in this matter because key functions are distributed throughout it. The triune brain theory also doesn’t square with the high animal intelligence recently found in (non-vertebrate) octopuses. Claims for the mammalian brain in particular don’t square with the high intelligence found in some birds, let alone with the fact that human consciousness remains an absolute outlier.

But MacLean’s idea has proven much too culturally satisfying to be spoiled by mere neuroscience. As one research team notes, “despite the mismatch with current understandings of vertebrate neurobiology, MacLean’s ideas remain popular in psychology. (A citation analysis shows that neuroscientists cite MacLean’s empirical articles, whereas non-neuropsychologists cite MacLean’s triune-brain articles.)”

It’s All in the Connections

Never mind, the exciting new world of -omes (genomes, epigenomes, biomes…) beckons. The connectome — essentially, a complete “wiring diagram” of the brain — might possibly identify human consciousness. In 2010, computational neuroscientist Sebastian Seung told humanity, “I am my connectome,” a thought on which he expanded in his 2012 book, Connectome: How the Brain’s Wiring Makes Us Who We Are. In 2012, National Institutes of Health director Francis Collins was thinking along the same lines: “Ever wonder what is it that makes you, you? Depending on whom you ask, there are a lot of different answers, but these days some of the world’s top neuroscientists might say: ‘You are your connectome.’”

That moment has passed. Harvard neuroscientist Jeff Lichtman, who is trying to map the brain, surveys the awful complexity nearly a decade later and sums up,

…if I asked, “Do you understand New York City?” you would probably respond, “What do you mean?” There’s all this complexity. If you can’t understand New York City, it’s not because you can’t get access to the data. It’s just there’s so much going on at the same time. That’s what a human brain is. It’s millions of things happening simultaneously among different types of cells, neuromodulators, genetic components, things from the outside. There’s no point when you can suddenly say, “I now understand the brain,” just as you wouldn’t say, “I now get New York City.”

GRIGORI GUITCHOUNTS, “AN EXISTENTIAL CRISIS IN NEUROSCIENCE,” NAUTILUS, JANUARY 22, 2020

In short, once we are into abstractions, we are no longer dealing with the concrete substance of the brain.

It’s All in the Electricity

But what about the bioelectric fields that swarm throughout the brain? Bioelectric currents, unlike the electric currents in wires, rely on ions rather than electrons, but they are still electricity. Evolutionary biologist and lawyer Tam Hunt tells us, “Nature seems to have figured out that electric fields, similar to the role they play in human-created machines, can power a wide array of processes essential to life. Perhaps even consciousness itself.” That’s a remarkable idea because it includes the notion that our individual cells exhibit consciousness: “Something like thinking, they argue, isn’t just something we do in our heads that requires brains. It’s a process even individual cells themselves, and not requiring any kind of brain, also take part in.”

This sounds cool but gets us nowhere. We have no reason to believe that our individual brain cells are conscious; what we know is that we are conscious as whole human beings. We could say the same about claims that everything is conscious (panpsychism) or that nothing is (eliminativism). Whatever else the claims do, they shed no light on the conundrum at hand.

Consciousness as an Undetected State of Matter

Max Tegmark, MIT physicist and author of Our Mathematical Universe: My Quest for the Ultimate Nature of Reality (Knopf, 2014), goes still further. He suggests that consciousness is a so far undetected state of matter, perceptronium, “defined as the most general substance that feels subjectively self-aware.” Which, again, gets us precisely nowhere.

Prominent neuroscientist Christof Koch notes more mundanely that physical distance in the brain matters: “A new study documents an ordering principle to these effects: the farther removed from sensory input or motor output structures, the less likely it is that a region contributes to consciousness.” And that’s about as far as neuroscience has got.

Koch has also written a book, The Feeling of Life Itself (MIT Press, 2019), where he tells us, among many other things, of dogs, Der Ring des Nibelungen, sentient machines, the loss of his belief in a personal God, and sadness, all seen as “signposts in the pursuit of his life’s work — to uncover the roots of consciousness.” And that is where we must leave the subject for now. We are back where we started — but we do have interesting books.

Thursday, 8 February 2024

The inspiration and creativity of actual intelligence vs. the running of algorithmic programs by artificial intelligence

 Artificial General Intelligence: The Oracle Problem


In computer science, oracles are external sources of information made available to otherwise self-contained algorithmic processes. Oracles are in effect “black boxes” that can produce a solution for any instance of a given problem, and then supply that solution to a computer program or algorithm. For example, an oracle that could provide tomorrow’s price for a given stock could be used in an algorithm that today — with phenomenal returns — executes buy-and-sell orders for that stock. Of course, no such oracle actually exists (or if it does, it is a closely guarded secret). 

The point of oracles in computer science is not whether they exist but whether they can help us study aspects of algorithms. Alan Turing proposed the idea of an oracle that supplies information external to an algorithm in his 1938 doctoral dissertation. Some oracles, like tomorrow’s stock predictor, cannot be represented algorithmically. Others can, but the problems they solve may be so computationally intensive that no real-world computer could solve them. The concept of an oracle is important in computer science for understanding the limits of computation.
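
To make the concept concrete, here is a minimal Python sketch (the names, the PriceOracle type, and the toy trading rule are illustrative assumptions, not any real system). The algorithm treats the oracle as a black-box callable and never looks inside it:

```python
from typing import Callable

# An oracle is a black box: the algorithm below never looks inside it.
PriceOracle = Callable[[str], float]  # ticker -> tomorrow's price

def trade(ticker: str, price_today: float, oracle: PriceOracle) -> str:
    """Decide today's order using an oracle for tomorrow's price."""
    price_tomorrow = oracle(ticker)   # information external to the algorithm
    return "BUY" if price_tomorrow > price_today else "SELL"

# No true price oracle exists, so for testing we substitute a stub.
def stub_oracle(ticker: str) -> float:
    return 101.0  # a made-up "tomorrow" price

print(trade("XYZ", 100.0, stub_oracle))  # -> BUY
```

Swapping in a different oracle changes the algorithm’s power without changing a line of its code, which is why oracles are useful for reasoning about the limits of computation.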

“Sing, Goddess, of the Anger of Achilles”

Turing’s choice of the word “oracle” was not accidental. Historically, oracles have denoted sources of information where the sender of the information is divine and the receiver is human. The Oracle of Delphi stands out in this regard, but there’s much in antiquity that could legitimately count as oracular. Consider, for instance, the opening of Homer’s Iliad: “Sing, goddess, of the anger of Achilles, son of Peleus.” The goddess here is one of the muses, presumably Calliope, the muse of epic poetry. In the ancient world, the value of artistic expression derived from its divine inspiration. Of course, prophecy in the Bible also falls under this conception of the oracular, as does real-time divine guidance of the believer’s life (as described in Proverbs 3:5–6 and John 16:13). 

Many of us are convinced that we have received information from oracles that can’t be explained in terms of everyday communication among people or everyday operations of the mind. We use many words to describe this oracular flow of information: inspiration, intuition, creative insight, dreams, reverie, collective unconscious, etc. Sometimes the language used is blatantly oracular. Einstein, for instance, told his biographer Banesh Hoffmann, “Ideas come from God.” Because Einstein did not believe in a personal God (Einstein would sometimes say he believed in the God of Spinoza), Hoffmann interpreted Einstein’s remark metaphorically to mean, “You cannot command the idea to come. It will come when it’s good and ready.” 

The Greatest Mathematician of His Age

Now granted, computational reductionists will dismiss such oracular talk as misleading nonsense. Really, all the information is there in some form already in the computational systems that make up our minds, and even though we are not aware of how the information is being processed, it is being processed nonetheless in purely computational and mechanistic ways. Clearly, this is what computational reductionists are bound to say. But the testimony of people in which they describe themselves as receiving information from an oracular realm needs to be taken seriously, especially if we are talking about people of the caliber of Einstein. Consider, for instance, how Henri Poincaré (1854–1912) described the process by which he made one of his outstanding mathematical discoveries. Poincaré was the greatest mathematician of his age (in 1905 he was awarded the Bolyai Prize ahead of David Hilbert). Here is how he described his discovery:

For fifteen days I strove to prove that there could not be any functions like those I have since called Fuchsian functions. I was then very ignorant; every day I seated myself at my work table, stayed an hour or two, tried a great number of combinations and reached no results. One evening, contrary to my custom, I drank black coffee and could not sleep. Ideas rose in crowds; I felt them collide until pairs interlocked, so to speak, making a stable combination. By the next morning I had established the existence of a class of Fuchsian functions, those which come from the hypergeometric series; I had only to write out the results, which took but a few hours. Then I wanted to represent these functions by the quotient of two series; this idea was perfectly conscious and deliberate, the analogy with elliptic functions guided me. I asked myself what properties these series must have if they existed, and I succeeded without difficulty in forming the series I have called theta-Fuchsian.

Just at this time I left Caen, where I was then living, to go on a geologic excursion under the auspices of the school of mines. The changes of travel made me forget my mathematical work. Having reached Coutances, we entered an omnibus to go some place or other. At the moment when I put my foot on the step the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience’ sake I verified the result at my leisure.

Again, the computational reductionist would contend that Poincaré’s mind was in fact merely operating as a computer. Accordingly, the crucial computations needed to resolve his theorems were going on in the background and then just happened to percolate into consciousness once the computations were complete. But the actual experience and self-understanding of thinkers like Einstein and Poincaré, in accounting for their bursts of creativity, is very different from what we expect of computation, which is to run a computer program until it yields an answer. Humanists reject such a view of human creativity. Joseph Campbell, in The Power of Myth, offered this rejoinder to computational reductionism: “Technology is not going to save us. Our computers, our tools, our machines are not enough. We have to rely on our intuition, our true being.” Of course, artists of all stripes have from ages past to the present invoked muses of one form or another as inspiring their work. 

A Clash of Worldviews?

Does this controversy over the role of oracles in human cognition therefore merely describe a clash of worldviews between a humanism that refuses to reduce our humanity to machines and a computational reductionism that embraces such a reduction? Is this controversy just a difference in viewpoints based on a difference in first principles? In fact, oracles pose a significant theoretical and evidential challenge to computational reductionism that goes well beyond a mere collision of worldviews. Computational reductionism faces a deep conceptual problem independent of any worldview controversy.

Computational reductionism faces an oracle problem. The problem may be described thus: our most advanced artificial intelligence systems, which I’m writing about in this series on Artificial General Intelligence (AGI), require input of external information to keep them from collapsing in on themselves. This problem applies especially to large language models (LLMs) and their most advanced current incarnation, GPT-4. I’m not talking here about the role of human agency in creating LLMs, which no one disputes. I’m not even talking here about all the humanly generated data that these neural networks ingest or all the subsequent training of these systems by humans. What I’m talking about here is that once all this work is done, these systems cannot simply be set loose and thrive on their own. They need continual propping up from our human intelligence. For LLMs, we are the oracles that make and continue to make them work.

The Death Knell for AGI

The need for ongoing human intervention in these systems may seem counterintuitive. It is also the death knell for AGI. Because if AGI is to succeed, it must surpass human intelligence, which means it must be able to leave us behind in the dust, learning and growing on its own, thriving and basking in its own marvelous capabilities. Like Aristotle’s unmoved mover God, who does not think about humanity or anything other than himself because it is in the nature of God only to think about the highest thing, and the highest thing of all is God. Thus, the Aristotelian God spends all his time contemplating only himself. A full-fledged AGI would do likewise, not deigning to occupy itself with lesser matters. (As an aside, AGI believers might take comfort in an AGI being so self-absorbed that it would not bother to destroy humanity. But to the degree that flesh-and-blood humans are a threat, or even merely an annoyance, to an AGI, it may be motivated to kill us all so as not to be distracted from contemplating itself!)

Unlike the Aristotelian God, LLMs do not thrive without human oracles continually feeding them novel information. There are sound mathematical reasons for this. The neural networks that are the basis for LLMs reside in finite-dimensional vector spaces. Everything in these spaces can therefore be expressed as a linear combination of finitely many basis vectors. In fact, they are simplexes and the linear combinations are convex, implying convergence to a center of mass, a point of mediocrity. When neural networks output anything, they are thus outputting what’s inherent in these predetermined subspaces. In consequence, they can’t output anything fundamentally new. Worse yet, as they populate their memory with their own productions and thereafter try to learn by teaching themselves, they essentially engage in an act of self-cannibalism. In the end, these systems go bankrupt because intelligence by its nature requires novel insights and creativity, which is to say, an oracle.
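
The center-of-mass claim, at least, is easy to illustrate numerically. Here is a hedged sketch (a toy population of random vectors, not anything drawn from an actual LLM): repeatedly replacing a set of points with random convex combinations of themselves collapses their spread toward the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(100, 3))       # a toy population of 100 vectors

for generation in range(30):
    # Each new point is a random convex combination of the current points:
    # Dirichlet weights are nonnegative and sum to 1.
    weights = rng.dirichlet(np.ones(100), size=100)
    points = weights @ points
    if generation % 10 == 0:
        print(f"generation {generation}: spread = {points.std():.3e}")
# the spread shrinks toward 0: everything collapses on the center of mass
```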

Research backs up this claim that LLMs run aground in the absence of oracular intervention, specifically external information added by humans. This becomes clear from the abstract of a recent article titled “The Curse of Recursion: Training on Generated Data Makes Models Forget”:

GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks… What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet.
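
The phenomenon is easy to reproduce in miniature. In the sketch below (a toy Gaussian model standing in for a generative model; the sample size and generation count are arbitrary), each generation is fitted only to samples drawn from the previous generation’s fit, and the fitted distribution’s tails steadily vanish:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=0.0, scale=1.0, size=50)     # generation 0: "human" data
mu, sigma = data.mean(), data.std()

for generation in range(1, 201):
    synthetic = rng.normal(mu, sigma, size=50)     # sample from the fitted model
    mu, sigma = synthetic.mean(), synthetic.std()  # refit on model output alone
    if generation % 50 == 0:
        print(f"generation {generation}: sigma = {sigma:.4f}")
# sigma drifts toward 0: the tails of the original distribution disappear
```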

Think of It This Way

LLMs like ChatGPT are limited by a fixed finite number of dimensions, but the creativity needed to make these artificial intelligence models thrive requires added dimensions. Creativity is always orthogonal to the status quo, and orthogonality, by being at right angles with the status quo, always adds new dimensions. Oracles add such creativity. Without oracles, artificial intelligence systems become solipsistic, turning in on themselves, rehashing only what is in them already, and eventually going bankrupt because they cannot supply the daily bread needed to sustain them. AGI’s oracle problem is therefore real and damning. 

But if AGI faces an oracle problem, don’t humans likewise face an oracle problem? Suppose AGIs require human oracles to thrive. Yet if oracles are so important for creativity, don’t humans need access to oracles as well? But how, asks the computational reductionist, does the external information needed for human intelligence to thrive get to us and into us? A purely mechanistic world is a solipsistic world with all its information internal and self-generated. On mechanistic principles, there’s no way for humans to have access to such oracles.

But why think that the world is mechanistic? Organisms, as we’ve seen, give no signs of being mechanisms. And physics allows for an informationally porous universe. Quantum indeterminacy, for instance, cannot rule out the input of information from transcendent sources. The simplest metaphor for understanding what’s at stake is the radio. If we listen to a symphony broadcast on the radio, we don’t think that the radio is generating the music we hear. Instead, the radio is a conduit for the music from another source. Humans are such conduits. And machines need to be such conduits (for ongoing human intelligent input) if they are to have any real value to us. 

Saturday, 3 February 2024

Scientists' ruminations on ruminants' EQ

 Researchers: Goats Can Read Basic Human Emotions


Readers may wonder at first whether this research was worth doing, but hang on. It turns out that goats can understand basic human emotions by voice alone, according to research co-led by Prof. Alan McElligott of City University of Hong Kong and Dr. Marianne Mason of London’s University of Roehampton:

In the experiment, goats listened to a series of voice playbacks expressing either a positive (happy) or a negative (angry) valence during the habituation phase, i.e., when the goat becomes accustomed to the human voice and valence, so they would respond less as the phase progressed. The recording was then switched from a positive to a negative valence (or vice versa) before being reversed.

“We predicted that if goats could discriminate emotional content conveyed in the human voice, they would dishabituate, looking faster and for longer towards the source of the sound, following the first shift in valence,” said Dr. Marianne Mason, University of Roehampton, UK.

MICHAEL GIBB, CITY UNIVERSITY OF HONG KONG, “RESEARCH SHOWS GOATS CAN TELL IF YOU ARE HAPPY OR ANGRY BY YOUR VOICE ALONE,” PHYS.ORG. THE PAPER IS OPEN ACCESS

When the emotional valence changed, 75 percent of the goats looked at the speaker for a longer time. That suggested that the goats had indeed sensed a change in emotional content.

Dogs, Horses, Livestock

Dogs and horses are well known to be sensitive to human emotions but, it can be argued, that is why humans form close relationships with them. What about livestock — animal species that we work with, and maybe live with, but are less likely to bond with? If they also can sense human emotions, that fact should be factored into their care, the researchers argue:

[T]he results are essential for adding to our understanding of animal behaviour, welfare and emotional experiences, especially since goats and other livestock will hear the human voice in their daily lives. Negatively valenced voices, like angry ones, may cause fear in animals. In contrast, positive ones may be perceived as calming and may even encourage animals to approach and help with human-animal bonding.

CITY UNIVERSITY OF HONG KONG, “BY YOUR VOICE ALONE”

Reason and Moral Choice

It shouldn’t be very surprising if a wide range of animals can understand the most basic human emotional states, like contentedness vs. anger or maybe fear. After all, those are precisely the elements of the mind that we all share. What animals don’t understand are reason and moral choice, the elements we don’t share. The goat may sense that the human is angry but she does not think “I wonder if he is angry because the price of feed has gone up?” or “It’s not morally right for him to go around shouting at everyone like that! It’s not our fault if the feed price went up!” She responds to simple emotion because that is what she understands. In the same way, humans can understand, and even study, the animal emotions we share.

Interestingly, this distinction plays a role in arguments about the immortality of the soul. As philosopher Edward Feser writes, “ … it is because human beings are rational animals that our souls can survive the deaths of our bodies, since … rational or intellectual powers are essentially incorporeal.” The underlying assumption is that abstractions, ideas, and moral principles are immaterial (incorporeal); thus the aspect of our minds that apprehends them must be too. The basic emotions that we share with animals are, on that view, more rooted in physical nature.

One outcome of this view, of course, is that, as Christof Koch has complained, it meant that no dogs, including his beloved Purzel, go to heaven. However, C. S. Lewis had thought of a possible way around that problem. For more on that story, see “Do Any Dogs Go to Heaven? If So, Why?”

P.S. With all due respect to Mr. Feser, his comments perfectly illustrate how false religious ideas like reductive spiritualism short-circuit clear thinking. Obviously, emotions like joy and anger are as immaterial as our moral sense. So to claim that the ability to experience and comprehend emotion is not as much evidence of a reductive spirit soul (if indeed there is such a thing) as possessing a moral sense is arbitrary. Another consideration is that at the beginning of our lives our moral sense is probably on the same level as a dog’s or a cat’s. Is this evidence that this supposed reductive spirit soul is absent at that stage of development?

Friday, 21 July 2023

The brain: Jack of all trades

 How Can a Woman Missing Her Olfactory Bulbs Still Smell?


Ever since neuroscientists started imaging the brain, they’ve been turning up cases where people are missing brain parts we would expect them to need in order to do something — but they are doing that very thing anyway. One example, written up in Live Science in 2019, concerns women who are missing their olfactory bulbs but can still smell.

Researchers have discovered a small group of people that seem to defy medical science: They can smell despite lacking “olfactory bulbs,” the region in the front of the brain that processes information about smells from the nose. It’s not clear how they are able to do this, but the findings suggest that the human brain may have a greater ability to adapt than previously thought.

YASEMIN SAPLAKOGLU, “WOMEN MISSING BRAIN’S OLFACTORY BULBS CAN STILL SMELL, PUZZLING SCIENTISTS,” LIVE SCIENCE, NOVEMBER 6, 2019. THE PAPER IN NEURON IS OPEN ACCESS.

All the More Remarkable

The story is all the more remarkable when we consider that the first woman identified had an especially good sense of smell; that was why she had signed up for the Israeli researchers’ study. Deciding to pursue the matter, the researchers tested other women. On the ninth try, they found another left-handed woman who could smell without an olfactory bulb.

A researcher who was not involved in the study, Joel Mainland of the Monell Chemical Senses Center in Philadelphia, was asked for comment:

The findings are “pretty counter to most of what the field thinks,” Mainland told Live Science. “I think it’s pretty critical that we figure out what’s happening.”

Yes. But that could take a while because there are a number of similar situations out there.

Last year, Medical Xpress reported on a woman who lacked a left temporal lobe, believed to be the language area of the brain:

EG told Fedorenko and her team that she only came to realize she had an unusual brain by accident—her brain was scanned in 1987 for an unrelated reason. Prior to the scan she had no idea she was different. By all accounts she behaved normally and had even earned an advanced degree. She also excelled in languages — she speaks fluent Russian — which is all the more surprising considering the left temporal lobe is the part of the brain most often associated with language processing.

Eager to learn more about the woman and her brain, the researchers accepted her into a study that involved capturing images of her brain using an fMRI machine while she was engaged in various activities, such as language processing and math. In so doing, they found no evidence of language processing happening in the left part of her brain; it was all happening in the right. They found that it was likely the woman had lost her left temporal lobe as a child, probably due to a stroke. The area where it had been had become filled with cerebrospinal fluid. To compensate, her brain had developed a language network in the right side of her brain that allowed her to communicate normally. The researchers also learned that EG had a sister who was missing her right temporal lobe, and who also had no symptoms of brain dysfunction — an indication, the researchers suggest, that there is a genetic component to the stroke and recovery process in the two women.

BOB YIRKA, “WOMAN WITH NO LEFT TEMPORAL LOBE DEVELOPED A LANGUAGE NETWORK IN THE RIGHT SIDE OF HER BRAIN,” MEDICAL XPRESS, APRIL 14, 2022. THE PAPER IS OPEN ACCESS.

It’s also come out that one in 4,000 people lacks a corpus callosum. That’s the structure of neural fibers that transfers information between the brain’s two hemispheres. It would seem a pretty important part of the brain, yet 25 percent of those who lack it show no symptoms. The others suffer mild to severe cognitive disorders. But we may well wonder how people manage in this situation at all:

In a study published in the journal Cerebral Cortex, neuroscientists from the University of Geneva (UNIGE) discovered that when the neuronal fibres that act as a bridge between the hemispheres are missing, the brain reorganises itself and creates an impressive number of connections inside each hemisphere. These create more intra-hemispheric connections than in a healthy brain, indicating that plasticity mechanisms are involved. It is thought that these mechanisms enable the brain to compensate for the losses by recreating connections to other brain regions using alternative neural pathways.

UNIVERSITÉ DE GENÈVE, “A MALFORMATION ILLUSTRATES THE INCREDIBLE PLASTICITY OF THE BRAIN,” SCIENCEDAILY, OCTOBER 30, 2020. THE PAPER IS OPEN ACCESS.

Prior to Brain Imaging

Recall that, prior to brain imaging, so long as a person was functioning normally, no one had any reason to suppose that a key brain part might simply be missing. And, let’s say its absence was discovered at autopsy. Who is to say that the absence of that part didn’t play some role in bringing about the person’s death? So it was only in recent decades that researchers discovered people of normal abilities with absent brain parts. That’s probably why we hear expressions like “seem to defy medical science” and “incredible plasticity” from the science media now.

Neuroplasticity is perhaps best understood as the human mind reaching out past physical gaps and barriers in any number of inventive ways. And it raises a question: If the mind is merely what the brain does, as many materialist pundits claim, what is the mind when the brain … doesn’t? At times, the mind appears to be picking up where the brain left off. 

Michael Egnor and I are looking forward to tackling topics like that in The Human Soul (Worthy, 2025).

Sunday, 9 July 2023

Mathematics: mother or daughter of creativity?

Is Mathematics Discovered or Invented?


Some think math is invented. (See an article by Peter Biles.) Evidence, though, points towards discovery. Simultaneous mathematical discovery supports this viewpoint: major breakthroughs are sometimes independently reported by two or more mathematicians at roughly the same time. The most famous example is the simultaneous discovery of calculus by Isaac Newton and Gottfried Wilhelm Leibniz. Newton was secretive about his discovery and shared his results with only a few members of the Royal Society. When Leibniz published his discovery of the calculus, Newton charged him with plagiarism. Today, historians agree that the discoveries were independent of each other.

Some Other Examples

Here are some other lesser-known examples of simultaneous discovery.

The Papoulis-Gerchberg Algorithm (PGA). The PGA is an ingenious method for recovering lost sections of bandlimited functions. (I describe the PGA in detail in my Handbook of Fourier Analysis.) The method was first reported by Athanasios Papoulis1 but first published in an archival journal by Gerchberg2; the two discoveries were independent.
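
For the curious, here is a minimal sketch of the PG iteration (my own toy implementation, not the version in the cited papers): alternately project the signal onto the set of bandlimited signals by zeroing its out-of-band DFT bins, then re-impose the samples that were never lost.

```python
import numpy as np

def papoulis_gerchberg(x_init, known, band, n_iter=500):
    """Alternately bandlimit the signal and restore the trusted samples."""
    x = x_init.copy()
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[~band] = 0.0               # project onto the bandlimited signals
        x = np.fft.ifft(X).real
        x[known] = x_init[known]     # re-impose the known samples
    return x

# Demo: a lowpass signal with a 40-sample gap.
N = 256
t = np.arange(N)
sig = np.cos(2 * np.pi * 3 * t / N) + 0.5 * np.sin(2 * np.pi * 5 * t / N)
known = np.ones(N, dtype=bool)
known[100:140] = False               # the lost samples
band = np.zeros(N, dtype=bool)
band[:8] = True                      # assumed band limit: low positive bins...
band[-7:] = True                     # ...and their conjugate partners
x0 = np.where(known, sig, 0.0)       # start with zeros in the gap
rec = papoulis_gerchberg(x0, known, band)
print("max gap error before:", np.abs(x0 - sig)[~known].max())
print("max gap error after: ", np.abs(rec - sig)[~known].max())
```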

The Karhunen–Loève Theorem, independently discovered by Kari Karhunen3 and Michel Loève4, showed that certain random processes could be represented as an infinite linear combination of orthogonal functions, analogous to a Fourier series.  
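
A small numerical illustration of the theorem (a sketch; Brownian motion is chosen because its KL expansion is known in closed form): the covariance kernel of Brownian motion on [0, 1] is min(s, t), and eigendecomposing a discretization of that kernel recovers the known eigenvalues 1/((k − ½)²π²), with sinusoidal eigenfunctions playing the role of the Fourier basis.

```python
import numpy as np

# Covariance kernel of Brownian motion on [0, 1]: K(s, t) = min(s, t).
n = 1000
t = (np.arange(n) + 0.5) / n
K = np.minimum.outer(t, t) / n          # discretized integral operator
eigvals = np.linalg.eigvalsh(K)[::-1]   # eigenvalues, largest first

for k in range(1, 4):                   # exact values: 1 / ((k - 1/2)^2 pi^2)
    exact = 1.0 / ((k - 0.5) ** 2 * np.pi ** 2)
    print(f"lambda_{k}: numeric {eigvals[k - 1]:.5f}, exact {exact:.5f}")
```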

Non-Euclidean Geometry. Euclid published Elements circa 300 BC, wonderfully establishing Euclidean geometry. It was only in the first half of the 19th century that three men — János Bolyai, Carl Friedrich Gauss, and Nikolai Lobachevsky — independently discovered non-Euclidean geometry. Jenkovszky et al.5 note: “The striking coincidence of independent discoveries… after more than two thousand years of stagnation, may seem almost miraculous.”

Space-Variant Processing. Here’s a personal example. During my graduate work, I developed a method for performing general space-variant processing. My advisor, John F. Walkup, found out that the same method was simultaneously discovered at Stanford by his PhD advisor’s research group. Rather than competing, we agreed to publish all of our findings in the same issue of the journal Applied Optics.6-7

Einstein’s Shoulders

In the context of the argument for discovery, some inventions can curiously be considered discovered rather than invented. Isaac Newton famously said that “if I have seen further [than others], it is by standing on the shoulders of giants.” Einstein built on Newton’s discoveries in classical physics and, in turn, stood on Newton’s shoulders with the formulation of relativity. Modern physicists stand on Einstein’s shoulders. The advancement of technology can likewise be considered standing on an ever-increasing stack of shoulders. This is certainly the case in artificial intelligence. Rosenblatt’s and Widrow’s early work on AI led to the discovery of error backpropagation for neural network training, which led to deep convolutional neural networks, deep learning, and the generative AI we use today.

Inventions can be discovered. An example of an invention being discovered by two men is the telephone. Alexander Graham Bell is credited with inventing the telephone. But according to the Library of Congress:

Elisha Gray, a professor at Oberlin College, applied for a caveat of the telephone on the same day Bell applied for his patent of the telephone … Bell’s lawyer got to the patent office first. The date was February 14, 1876. He was the fifth entry of that day, while Gray’s lawyer was 39th. Therefore, the U.S. Patent Office awarded Bell with the first patent for a telephone, US Patent Number 174,465 rather than honor Gray’s caveat.

If true, both Gray and Bell were standing on the shoulders of those who proposed the telegraph and glimpsed the possibility of the telephone.

Philosophers might contemplate how the question of discovery versus invention parallels the debate between predestination and free will. If inventions and advancements in mathematics are discovered, the future is, in a sense, predestined by our discoveries. The pros and cons of the debate will continue well beyond the arguments presented here.

References

1. A. Papoulis, “A new method of image restoration,” Joint Services Technical Activity Report 39, 1973–74.
2. R. W. Gerchberg, “Super-resolution through error energy reduction,” Optica Acta, vol. 21, pp. 709–720, 1974.
3. Kari Karhunen, “Zur Spektraltheorie stochastischer Prozesse,” Ann. Acad. Sci. Fennicae 37 (1946).
4. Michel Loève, Probability Theory (Princeton, NJ: Van Nostrand, 1955).
5. László Jenkovszky, Matthew J. Lake, and Vladimir Soloviev, “János Bolyai, Carl Friedrich Gauss, Nikolai Lobachevsky and the New Geometry: Foreword,” Symmetry 15, no. 3 (2023): 707.
6. R. J. Marks II, J. F. Walkup, M. O. Hagler, and T. F. Krile, “Space-variant processing of one-dimensional signals,” Applied Optics, vol. 16, pp. 739–745, 1977.
7. Joseph W. Goodman, Peter Kellman, and E. W. Hansen, “Linear space-variant optical processing of 1-D signals,” Applied Optics 16, no. 3 (1977): 733–738.

Tuesday, 27 June 2023

We are free to acknowledge free moral agency

 Free Will: What Are the Reasons to Believe in It?


University of Missouri psychology professor Kennon Sheldon’s message is neatly summed up in an opening statement: “Regardless of whether humans do or don’t have free will, psychological research shows it’s beneficial to act as if you do.”

The author of Freely Determined: What the New Psychology of the Self Teaches Us About How to Live (Basic Books, 2022) responds to philosophers who say that we do not have free will:

All my life, I’ve struggled with the question of whether humans have ‘free will’. It catalysed my decision to become a psychologist and continues to inspire my research to this day, especially as it relates to the kinds of goals people set for themselves, and the effects of goal-striving on people’s happiness and wellbeing.

I’ve come to the conclusion that people really do have free will, at least when it is defined as the ability to make reasoned choices among action possibilities that we ourselves think up…

Regardless of who is correct in this debate, my work has led me to a second conclusion that I consider even more important than whether we have free will or not. It’s that a belief in our own capacity to make choices is critical for our mental health. At the very least, this belief lets us function ‘as if’ we have free will, which greatly benefits us.

KENNON SHELDON, “THE THREE REASONS WHY IT’S GOOD FOR YOU TO BELIEVE IN FREE WILL,” PSYCHE, JUNE 15, 2023 

An Obvious Problem

Now, the obvious problem with his approach is that if we believe in free will simply because that belief is supposed to be good for our mental health, then we really don’t believe in it.

A simple example suffices: We sometimes hear that being optimistic is also better for mental health. In one sense, that’s true. If we focus on the positive things, our lives feel more pleasant and that is bound to be better for mental health. But what if we have no good reason for optimism? What if we live under an active volcano that shows signs of erupting? Optimism (“it probably won’t really happen this year”) could delay evacuation past the point of no return.

So let’s look back at free will in this light: If we believe that we have it — and that belief is true — we are empowered to deal with temptations and addictions, firm in the knowledge that we really can cast the deciding vote for our best possible outcome. But if free will is not true, we are setting ourselves up for delusion if we succeed and needless disappointment and misery if we fail. Not only that but we are participating in an unfair system where people are judged and punished for unwise or bad behavior that they cannot really help. So functioning “as if” we have free will turns out not to be very good at all.

“A Better Person”?

Sheldon goes on to say,

The second reason why I consider belief in free will to be beneficial is that it makes you a better person. Studies in social psychology show clearly that, if people become convinced that they have no free will, there can be negative effects on their ethical behaviour.

SHELDON, PSYCHE, 2023

Perhaps that’s true but it amounts to saying that perhaps we should be deluded for our own good. Even though delusions are said to be bad for us… Is there any light at the end of this tunnel? 

Sheldon offers a reason why some thinkers deny free will:

You might wonder why anyone would choose to believe in determinism, given the clear negative effects of this belief? There are several possible reasons. Some people might think that determinism is the most scientific and intellectually sophisticated position to take, and they like feeling smarter than others.

SHELDON, PSYCHE, 2023

Well, if science matters, the good news is that neuroscience provides sound reasons to believe in free will. As Stony Brook neurosurgeon Michael Egnor has pointed out, the work of neuroscience pioneer Benjamin Libet established that we certainly have “free won’t” — the ability to choose not to do something:

[W]hat he found was, when you made a decision to push the button [in a psychological experiment], you still had the brain wave that preceded the decision by half a second. But when you decided to veto pushing the button, there was no new brain wave at all. It was silent in terms of brain waves. But you did make the decision to veto. So he said that it wasn’t so much that you have free will but you have free won’t. That is, you have the ability to decide whether or not you are going to comply with what your brain is urging you to do. And that compliance is not material. It’s not a brain wave. It’s immaterial.

MICHAEL EGNOR, “HOW A NEUROSCIENTIST IMAGED FREE WILL (AND “FREE WON’T”),” MIND MATTERS NEWS, MARCH 19, 2020 

What Quantum Mechanics Shows

Physicist Marcelo Gleiser also notes that science does not really support the view that free will is an illusion: “[T]he mind is not a solar system with strict deterministic laws. We have no clue what kinds of laws it follows, apart from very simplistic empirical laws about nerve impulses and their propagation, which already reveal complex nonlinear dynamics.” In any event, quantum mechanics shows that nature is indeterminate at the fundamental level and that the observer’s decision of what to measure plays a role in what happens. One outcome is that a number of younger thinkers accept free will as consistent with the evidence.

In other words, we can accept free will based on the evidence. There is no particular need to think that it might be a possibly pleasant delusion.

Monday, 19 June 2023

Once more we turn to Gods of our own making?

At the Altar of that Hideous Strength

Peter Biles 

C. S. Lewis’s 1946 science fiction novel That Hideous Strength is almost eighty years old now. Written during the throes of World War II, the novel is the culmination of Lewis’s cosmic trilogy, preceded by Out of the Silent Planet and Perelandra. There are hosts of other articles attending to the prescience of Lewis’s terrifying novel, and for good reason; That Hideous Strength is a warning against using technology to dehumanize people and ultimately cripple the world into submission. It’s a great book as a novel, but it seems especially appropriate to revisit in light of the growing interest in transhumanism and the rapid acceleration of AI development.


It feels like much of the talk on AI in recent months involves its surface-level manifestations or consequences. It might take away jobs in journalism and help college kids cheat on exams. These are real concerns. The other dangers involving AI scams, disinformation, and deepfakes are formidable, too. And yet a novel like That Hideous Strength shows the danger behind the danger: the temptation to reject being merely human.

A N.I.C.E. Organization

The novel’s protagonist, Mark Studdock, must decide whether he’ll opt into a scheme to destroy humanity via machine intelligence. Mark, a sociologist in training, gets caught up in a secret society known as N.I.C.E., which stands for the National Institute for Coordinated Experiments. N.I.C.E. is the perfect symbol of today’s benevolent-sounding yet bloated and banal administrative state, carrying out power initiatives that impact everyday people. While it’s unclear exactly what the overall goals of the Institute are, one thing is clear: it’s time to throw off the limits of being human and transcend into the world of pure intelligence.


Mark Studdock and his wife, Jane, both have a lot to learn in the book. Mark is eager to find acceptance among elites at N.I.C.E., while Jane, who is a lapsed academic struggling to finish her dissertation on John Donne, longs for a kind of freedom and independence that her married life fails to afford her. Through their own journeys, both learn that accepting their limits and choosing to commit to each other is the real path to freedom. In the end, domestic family life, which includes birth, growth, and death, is envisioned as a kind of antidote to the mad quest for human immortality and domination. 

That Didn’t Take Long

Okay, so that was 1946. It’s 2023. OpenAI, ChatGPT, Altos Labs, bio-longevity; is any of that relevant to C. S. Lewis’s great book? Paul Kingsnorth thinks so. Kingsnorth writes often on the state of our tech-intoxicated culture. He doesn’t own a smartphone. He apologetically writes on Substack while decrying all the bad things the Internet has done to us. But his voice is among the few out there pointing out how “merging ourselves with the Machine” will compromise our humanity. In a recent piece, “The Universal,” at his Substack The Abbey of Misrule, Kingsnorth asks what we gain by developing these new AI tools. He writes,

Nearly sixty years back, the cultural theorist Marshall McLuhan offered a theory of technology which hinted at an answer. He saw each new invention as an extension of an existing human capability. In this understanding, a club extends what we can do with our fist, and a wheel extends what we can do with our legs. Some technologies then extend the capacity of previous ones: a hand loom is replaced by a steam loom; a horse and cart is replaced by a motor car, and so on.

What human capacity, then, is digital technology extending? The answer, said McLuhan, was our very consciousness itself. 

PAUL KINGSNORTH, “THE UNIVERSAL,” THE ABBEY OF MISRULE (SUBSTACK.COM)

While earlier technologies made life more convenient, faster, or more efficient, artificial intelligence is about extending human consciousness itself. Do we want that? What would that mean for our ability to think, understand, and reason on our own? Beyond that, AI at its worst will be a kind of divinity, a man-made God. At least, that’s what our friends over in the transhumanist camp would like. Kingsnorth continues,

Transhumanist Martine Rothblatt says that by building AI systems ‘we are making God.’ Transhumanist Elise Bohan says ‘we are building God.’ Kevin Kelly believes that ‘we can see more of God in a cell phone than in a tree frog.’ ‘Does God exist?’ asks transhumanist and Google maven Ray Kurzweil. ‘I would say, “Not yet.”’ These people are doing more than trying to steal fire from the gods. They are trying to steal the gods themselves — or to build their own versions.

For the last two years, I have found myself writing a lot here about God; more than I had intended. I have claimed several times that there is a throne at the heart of every culture, and that someone is always going to sit on it. Humans are fundamentally religious animals. We are drawn towards transcendence whether we like it or not. But here in the West, we have dethroned our old god, and now we can barely look at him.

We Should All Keep Our Heads

Kingsnorth’s article is worth reading in full, and his Substack is consistently interesting and compelling. 

We don’t like to try to predict the future here at Mind Matters, since it seems so difficult to do. Nonetheless, C. S. Lewis’s novel and Kingsnorth’s warning rightly point out the dangers of depending too much on the machines we create. They might make us feel powerful, but in reality they leave us weak.

Peter Kreeft, a philosopher at Boston College, wrote about this idea in his great book The Philosophy of Tolkien: The Worldview Behind The Lord of the Rings. As the title indicates, the book goes into depth on Tolkien’s intricate worldview, conception of ethics, and the battle between good and evil. Kreeft writes, 

We have done exactly what Sauron did in forging the Ring. We have put our power into things in order to increase our power. And the result is, as everyone knows but no one admits, that we are now weak little wimps, unable to survive a blow to the great spider of our technological network. We tremble before a nationwide electrical blackout or a global computer virus. Only hillbillies and Boy Scouts would survive a nuclear war. In our drive for power we have deceived ourselves into thinking that we have become more powerful when all the time we have been becoming less.

PETER KREEFT, THE PHILOSOPHY OF TOLKIEN: THE WORLDVIEW BEHIND THE LORD OF THE RINGS

This is why it probably wouldn’t be such a great idea to force everyone to get electric stoves: the power goes out, and everyone’s basically stranded. I appreciate the technology of a lighter, but if I’m ever trapped in the Colorado wilderness and need to keep warm, I’d probably need a how-to manual for making fire from friction. Technology shows the remarkable ingenuity of human beings, but the more sophisticated it gets, the more tempting it will be to surrender the creativity that makes us unique and live as makeshift drones.


In short, we create technology, but we seem to be at a point where it’s compromising the very things that allowed us to develop it in the first place: innovation, creativity, hard work. If it’s ever stripped away, will we have the skills, stamina, and discipline to recover? If I upload my memories, consciousness, and relationships into the Machine, will I have laid myself at the altar of that hideous strength?

Tuesday, 30 May 2023

On artificial intelligence and genuine stupidity?

 Breaking ChatGPT: Its Inability to Find Patterns in Numerical Sequences


Here’s a sequence of numbers: 2,4,6,8,10,12,… What’s the next number in the sequence? Obviously 14. That one was simple. But what if we make finding the pattern more difficult? It turns out ChatGPT has serious difficulties with numerical patterns. The problem is not just that it can’t figure out the pattern. It makes up patterns that don’t exist. And then it acts as though it knows what’s going on when in fact it is clueless.
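For contrast, a pattern as regular as the opening example can be extended purely mechanically. Here is a minimal Python sketch (an illustration of how trivial that task is, not a model of how any chatbot works) that detects a constant difference and continues the sequence:

```python
# Extend an arithmetic sequence by detecting its constant difference.
seq = [2, 4, 6, 8, 10, 12]
diffs = {b - a for a, b in zip(seq, seq[1:])}
if len(diffs) == 1:          # every gap is the same size
    step = diffs.pop()
    print(seq[-1] + step)    # 14
```

The sequences below resist this kind of rote extrapolation, which is precisely what makes them a test of inference rather than mere computation.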

Pattern completion tasks like this have been part of aptitude testing for a long time. The NSA and CIA, for instance, have used them to help in hiring analysts. These tasks also epitomize a key obstacle facing artificial intelligence, which is its inability to model abductive inference, or what is also called inference to the best explanation (Erik Larson explains the failure of AGI, or artificial general intelligence, to model this form of inference in his 2021 Harvard University Press book, The Myth of Artificial Intelligence).

Effect to Cause

Inference to the best explanation is effect to cause reasoning. It looks at some effect to be explained and then tries to come up with an explanation. The problem is that the range of possible explanations is unlimited. Somehow humans are able to whittle down that range and engage in successful IBEs (inferences to the best explanation). Computers, however, have no facility in this regard, and LLMs (large language models) based on neural nets, such as ChatGPT, are a case in point. 

It’s one thing to make the theoretical case that these models cannot adequately perform inferences to the best explanation. It’s another to provide concrete examples of this failure. I’m going to do that now. Below are four queries I posed to ChatGPT4, asking it to figure out the underlying pattern of a numerical sequence and then produce additional numbers in the sequence. Producing such additional numbers is straightforward once one sees the pattern.

The breakdown in each of these cases is startling. Indeed, the incomprehension is face-palm level. I’ll insert some commentary in [square brackets]. Everything else is from four recent exchanges with ChatGPT4. I made the sequences reasonably difficult. It would be one thing if ChatGPT4 had simply said that it didn’t see the pattern, confessing ignorance. But in each case it pretended to understand the pattern. And in each case it claimed to produce the correct continuation to the pattern — and failed!

For the first of these, I’ll provide the continuation and an explanation of the pattern. For the other three, I’ll simply provide the continuation of the pattern, leaving it as a puzzle to readers to figure out the underlying pattern. Note that unlike the example with which I started this article, I don’t include any breaks (e.g., commas or spaces) between the numbers. 

Numerical Pattern #1: Primes Plus Fibonaccis
My Query:

What is the pattern in the following numbers and what are the next two digits: 3471016213040

ChatGPT’s Response:

This sequence appears to follow a pattern based on the positional value of digits (zero-based) in the decimal representation of pi (π).

π to several decimal places is: 3.14159265358979323846…

The sequence appears to take each digit’s value from pi’s decimal representation as an index, starting from zero:

The 0th digit of pi is 3.

The 1st digit of pi is 1.

The 4th digit of pi is 1.

The 1st digit of pi is 1.

The 5th digit of pi is 9.

The 9th digit of pi is 2.

The 2nd digit of pi is 4.

The 6th digit of pi is 5.

The 5th digit of pi is 9.

The 3rd digit of pi is 1.

The 8th digit of pi is 6.

The 9th digit of pi is 2.

The 7th digit of pi is 9.

So, the sequence corresponds to the decimal representation of pi, where each digit in the sequence represents the index in the decimal representation of pi.

To find the next two digits, we follow the pattern:

The 9th digit of pi is 2.

The 2nd digit of pi is 4.

So, the next two digits in the sequence should be 24. Thus, the extended sequence would be: 347101621304024.

[Me again: This is word salad. ChatGPT seems to find some similarity between the sequence I put forward and the number pi. But that similarity is entirely imaginary. In fact, this number is formed by taking the prime numbers 2, 3, 5, 7, 11, 13, 17, 19, etc., and adding them, number by number, to the Fibonacci numbers 1, 1, 2, 3, 5, 8, 13, 21, etc. That method yields 3, 4, 7, 10, 16, 21, 30, 40, which, when commas and spaces are removed, is just the number I posed in my query to ChatGPT, namely, 3471016213040. The next prime number after 19 is 23, and the next Fibonacci number after 21 is 34, the sum of which is 57. That is the number that ChatGPT should have put down. Instead it put down 24 — not even close. And its explanation was nonsense, betraying complete incomprehension.]
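The rule just described is easy to verify in a few lines of Python. Here is a minimal sketch (the helper functions are illustrative, not anything from the original exchange) that reproduces the queried sequence and its correct continuation:

```python
# Numerical Pattern #1: the n-th prime plus the n-th Fibonacci number,
# with the resulting terms concatenated into one digit string.

def first_primes(n):
    """Return the first n primes by trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def first_fibonaccis(n):
    """Return the first n Fibonacci numbers: 1, 1, 2, 3, 5, ..."""
    fibs, a, b = [], 1, 1
    for _ in range(n):
        fibs.append(a)
        a, b = b, a + b
    return fibs

terms = [p + f for p, f in zip(first_primes(9), first_fibonaccis(9))]
print(terms)                         # [3, 4, 7, 10, 16, 21, 30, 40, 57]
print("".join(map(str, terms[:8])))  # 3471016213040 -- the queried sequence
print(terms[8])                      # 57 -- the continuation ChatGPT missed
```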

Numerical Pattern #2
My Query:

What is the pattern of the following sequence of numbers (in particular, how would it be continued): 111121112211312211331122211

ChatGPT’s Response:

This sequence appears to be a variant of the “Look-and-Say” sequence. In this sequence, each term describes the previous term. However, in the typical Look-and-Say sequence, each term is made up of two-digit numbers where the first digit is the count and the second digit is what is being counted.

In this variant, the sequence seems to use one-digit counts and what’s being counted.

Here is how the sequence seems to work:

111: Describes one ‘1’ => ’11’

121: Describes one ‘1’, one ‘2’ => ‘1121’

1121: Describes two ‘1’, one ‘2’ => ‘211221’

211221: Describes one ‘1’, one ‘2’, two ‘1’ => ‘121221’

121221: Describes one ‘1’, one ‘2’, two ‘2’, one ‘1’ => ‘11122121’

11122121: Describes three ‘1’, two ‘2’, one ‘1’ => ‘31221131’

31221131: Describes one ‘1’, one ‘3’, two ‘2’, two ‘1’ => ‘11312221’

So, the next term in the sequence after 111121112211312211331122211 should be: 3113112211322112211213322111.

[Me again: In this case ChatGPT got it right that this was a variant of the look-and-say sequence, but it was spinning its wheels when trying to figure out how exactly it was a variant. I leave it as an exercise to the reader to figure out the precise pattern here. The next numbers, which ChatGPT should have put down if it knew what it was doing, would be 13213221. It got that completely wrong.]
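For readers who want to attack the puzzle, the standard look-and-say rule is a natural starting point. Here is a short Python sketch of that textbook rule (not the variant used in the query, which remains an exercise):

```python
# The standard look-and-say rule: each term describes the previous one,
# with each run of identical digits replaced by a (count, digit) pair.
from itertools import groupby

def look_and_say(term: str) -> str:
    return "".join(f"{len(list(run))}{digit}" for digit, run in groupby(term))

term = "1"
for _ in range(6):
    print(term)              # 1, 11, 21, 1211, 111221, 312211
    term = look_and_say(term)
```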

Numerical Pattern #3
My Query:

What is the pattern in the following sequence of digits and what six numbers come next: 271828394274528731641187875644998090

ChatGPT’s Response:

This sequence appears to represent the first few digits of the decimal expansion of e (Euler’s number) in groups of two, with each group shifted one digit to the right from the previous group.

Here’s the first part of e to 50 decimal places:

2.71828182845904523536028747135266249775724709369995957…

When we group them in twos and align them with your sequence:

27, 18, 28, 18, 28, 45, 90, 45, 23, 53, 60, 28, 74, 71, 35, 26, 62, 49, 77, 57, 24, 70, 93, 69, 99, 59, 57…

We can see that your sequence lines up with these groups, but each group starts one digit later than the previous group.

So, the next six numbers, each taken from the next digit of e and the one after it, would be:

49, starting from the 49th digit of e.
57, starting from the 50th digit of e.
24, starting from the 51st digit of e.
70, starting from the 52nd digit of e.
93, starting from the 53rd digit of e.
69, starting from the 54th digit of e.
So, the sequence extended by the next six numbers would be:

271828394274528731641187875644998090495724709369.

[Me again: ChatGPT’s explanation is mystifying. I did indeed take the first six digits of Euler’s e to start off this sequence, but I did so in part to misdirect ChatGPT, because it seems to pay attention to the opening of a numerical sequence and then make more of that opening than it deserves. After those first six digits of e, the digits that follow bear no resemblance to e. ChatGPT claims that the continuation of the sequence is 495724709369. In fact, the continuation is 122557.]
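ChatGPT’s claim here can be falsified mechanically. The two-digit groups it lists are simply consecutive digit pairs of e, so, insofar as its garbled description can be parsed at all, it amounts to the query tracking e’s expansion. A quick Python check (with the digits of e hard-coded from the 50-place expansion quoted in ChatGPT’s response) shows that the agreement ends after the sixth digit:

```python
# Compare the queried sequence with the digits of e.
E_DIGITS = "271828182845904523536028747135266249775724709369995957"
QUERY = "271828394274528731641187875644998090"

i = next(k for k, (q, e) in enumerate(zip(QUERY, E_DIGITS)) if q != e)
print(i, QUERY[i], E_DIGITS[i])  # 6 3 1 -- agreement ends after '271828'
```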

Numerical Pattern #4
My Query:

What is the pattern in the following numbers and what five digits are next: 314152718262820442642464088428

ChatGPT’s Response:

The sequence seems to be an alternation between digits of pi and squares of natural numbers.

Here’s how it works:

The digits of pi are: 3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9,…
The squares of natural numbers are: 1, 4, 9, 16, 25, 36, 49, 64, 81, 100,…
When interwoven, you get: 3 (from pi), 1 (1^2), 4 (from pi), 1 (2^2), 5 (from pi), 9 (3^2), 2 (from pi), 6 (4^2), 5 (from pi), 3 (5^2), 5 (from pi), 8 (6^2), 9 (from pi), 7 (7^2), 9 (from pi), 4 (8^2), 2 (from pi), 6 (9^2), 4 (from pi), 4 (10^2), 6 (from pi), 4 (11^2), 0 (from pi), 8 (12^2), 8 (from pi), 4 (13^2), 2 (from pi), 8 (14^2).

Following this pattern, the next five digits would be from the next digit of pi (8) and the square of the next natural number (15^2 = 225). So the next five digits should be “8225”.

[Me again: This explanation is nonsensical, like the others. Yes, I start with the first five digits of pi and then the first five of Euler’s e. But after that, I’m not trying to evaluate or approximate these numbers. Note that ChatGPT lists the next five digits to continue the pattern as 8225. There are only four digits here. It is, as Wolfgang Pauli used to say, not even wrong. The next five digits should be 48280.]
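The same kind of mechanical check exposes Pattern #4. Concatenating the digits that ChatGPT itself lists in its “interweaving” yields a string that diverges from the actual query at the sixth digit, so its explanation does not even reproduce the sequence it was asked about (a quick sketch; both strings are transcribed from the exchange above):

```python
# ChatGPT's own listed digits, concatenated, versus the actual query.
claimed = "3141592653589794264464088428"  # transcribed from its answer
query = "314152718262820442642464088428"  # the sequence it was given

i = next(k for k, (q, c) in enumerate(zip(query, claimed)) if q != c)
print(i, query[i], claimed[i])  # 5 2 9 -- the "pattern" fails at the sixth digit
print(len("8225"))              # 4 -- its promised five digits are only four
```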

Conclusion

On the basis of these four tests, finding patterns in numerical sequences appears to be a systemic point of breakdown for ChatGPT. It seems emblematic of a more general problem: an inability to carry out abductive inferences, or what are also known as inferences to the best explanation. The numerical patterns that I considered were not particularly obvious. I plan to do some further tests to see how simple the patterns can be made while ChatGPT still fails to uncover them.

ChatGPT’s failure with inference to the best explanation is a gaping hole in its ability to achieve genuine language comprehension. Add to this its failure at self-transcendence (as shown by its inability to extract itself from self-referential linguistic situations — see here and also the same problem for Google Bard), and we have good reason to doubt the linguistic comprehension of these systems in general. We should therefore distrust these systems for any serious inquiry or decision.