
Friday, 9 May 2025

Neanderthals might be smarter than darwinists?

 Slow-Witted? Neanderthals Invented Their Own Tech — Didn’t Copy


Here is another instance of Neanderthals, once thought to be comparatively slow-witted, taking the lead in developing a technology. At Ars Technica, science writer Kiona N. Smith notes:

Archaeologists recently unearthed a bone projectile point someone dropped on a cave floor between 70,000 and 80,000 years ago — which, based on its location, means that said someone must have been a Neanderthal.

The point (or in paleoarchaeologist Liubov V. Golovanova and colleagues’ super-technical archaeological terms, “a unique pointy bone artifact”) is the oldest bone tip from a hunting weapon ever found in Europe. It’s also evidence that Neanderthals figured out how to shape bone into smooth, aerodynamic projectiles on their own, without needing to copy those upstart Homo sapiens. Along with the bone tools, jewelry, and even rope that archaeologists have found at other Neanderthal sites, the projectile is one more clue pointing to the fact that Neanderthals were actually pretty sharp. 

“NEANDERTHALS INVENTED THEIR OWN BONE WEAPON TECHNOLOGY BY 80,000 YEARS AGO,” MAY 2, 2025

Not Grandpa’s Neanderthal Anymore

The bone tip was found in Mezmaiskaya Cave (pictured at the top) in the Caucasus Mountains. From the paper’s Abstract:

The results suggest an independent invention of bone-tipped hunting weapons by Neanderthals in Europe long before the arrival of Upper Paleolithic modern humans to the continent, and also show that the production technology of bone-tipped hunting weapons used by Neanderthals was in the nascent level in comparison to those used and introduced to Eurasia by modern humans. 

LIUBOV V. GOLOVANOVA ET AL, ON THE MOUSTERIAN ORIGIN OF BONE-TIPPED HUNTING WEAPONS IN EUROPE: EVIDENCE FROM MEZMAISKAYA CAVE, NORTH CAUCASUS, JOURNAL OF ARCHAEOLOGICAL SCIENCE (2025). DOI: 10.1016/J.JAS.2025.106223

Science writer Bob Yirka comments at Phys.org, “The finding of the spear tip upends theories suggesting that Neanderthals never advanced past stone tools. It also shows, the team suggests, that Neanderthals were able to plan ahead, not only in making the tool, but in the way it was used.”

Neanderthals cannot be the missing link that many paleontologists are looking for. But if the human mind has no history, there is no missing link.

Saturday, 3 May 2025

On designed intelligence

 Can We Credit Human Creativity to Blind Evolution?


Does the evolution of brain chemistry explain novels, speeches, and innovative ideas? Pat Flynn explored that critical question with Dr. Eric Holloway and Professor Robert J. Marks in a recent episode of the Mind Matters podcast. Delving into a chapter from their book Minding the Brain, they focused on the “information cost” of creativity. They argue that the complexity involved in generating meaningful phrases surpasses the computational resources of the universe. That is a challenge to naturalistic explanations and suggests a need for an external source of creativity.

Defining Creativity

Creativity is defined through the lens of the Lovelace test, proposed by Selmer Bringsjord, which posits that a creative act by a computer must exceed the intent or explanation of its programmer. Dr. Marks emphasizes that artificial intelligence, including large language models, operates within the bounds of programmed instructions, lacking true creativity.

Dr. Holloway distinguishes creativity from randomness. Creativity cannot be reduced to probabilistic distributions because random processes lack the intentionality required for meaningful output. This distinction sets the stage for questioning whether evolutionary processes, often equated with randomness, can account for human creativity.

The Information Cost of Meaning

Mountain climbing is a useful metaphor to illustrate the challenge of generating meaningful phrases. The “summit” represents a meaningful phrase, and the “climb” represents the process of reaching it through random selection of letters from a 27-character alphabet (including spaces). Meaning is broadly defined as any string of letters corresponding to words in a dictionary.

The authors argue that creating meaningful phrases requires an extraordinarily high amount of information. They calculate that even with the universe’s computational capacity — estimated at 10^244 bits based on Planck cubes and Planck time units — only a 268-character phrase could be generated randomly. Even hypothesizing parallel universes (up to 10^1000) would only marginally increase this number to 1,380 characters, underscoring the exponential difficulty of the task.
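
To see the flavor of the arithmetic, here is a minimal sketch in Python (my own naive version, not the authors' calculation; their 268-character figure uses a more generous success criterion, since it counts a hit on any dictionary-word phrase rather than one specific string):

```python
# Brute-force arithmetic behind the claim, assuming a 27-character
# alphabet (a-z plus space) and 10^244 elementary trials, the estimate
# of the universe's computational capacity quoted above.
ALPHABET = 27
UNIVERSE_TRIALS = 10 ** 244

def search_space(length):
    """Number of distinct strings of the given length."""
    return ALPHABET ** length

# Longest phrase whose whole search space still fits inside the budget:
n = 0
while search_space(n + 1) <= UNIVERSE_TRIALS:
    n += 1
print(n)   # 170: exhaustive enumeration runs out well before 268
# The chapter's larger 268-character figure credits the search with
# hitting *any* meaningful phrase, a far bigger target than one string;
# either way, the reachable length grows only logarithmically in the
# number of trials, which is the point of the argument.
```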

The Role of Active Information

Active information is introduced as the guidance needed to navigate the metaphorical mountain. Without it, per the No Free Lunch theorem, no search strategy can be expected to do better than blind random sampling.

Dr. Marks illustrates this with an analogy: finding an Easter egg in Wyoming. Without accurate guidance, the search is futile. An active information source like “you are getting warmer” or “you are getting colder” is needed to find the egg.

If we use the mountain climbing metaphor instead, active information is like an escalator on the side of the mountain that lets you reach the summit more easily.
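
To make the contrast concrete, here is a minimal sketch (functions and numbers are mine, not from the podcast): a blind search over 100,000 possibilities versus a search assisted by a "too low / too high" oracle, a simple stand-in for "warmer/colder" feedback.

```python
# Blind search vs. search guided by feedback (one form of active information).
import random

N = 100_000                      # size of the search space
target = random.randrange(N)

def blind_search(max_tries=10 * N):
    """Guess uniformly at random; expected ~N tries."""
    for tries in range(1, max_tries + 1):
        if random.randrange(N) == target:
            return tries
    return max_tries             # gave up

def guided_search():
    """A 'too low / too high' oracle supports bisection: ~log2(N) tries."""
    lo, hi, tries = 0, N - 1, 0
    while True:
        mid = (lo + hi) // 2
        tries += 1
        if mid == target:
            return tries
        if mid < target:
            lo = mid + 1
        else:
            hi = mid - 1

print("blind :", blind_search())   # on the order of 100,000 tries
print("guided:", guided_search())  # about 17 tries
```

The feedback does not do the searching; it supplies information the searcher does not have, which is exactly the role active information plays in the argument.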

Metaphysical Considerations

The discussion also touches on a metaphysical argument by philosopher Richard Taylor (1919‒2003), who argued that meaning cannot arise from random processes. Using the example of rocks forming the phrase “Welcome to Wales” by chance, Taylor argues that such an arrangement, if truly random, lacks intentionality and thus cannot convey meaning. Even if random processes could produce such a meaningful arrangement, the absence of a mind behind it negates its semantic content. Creativity requires a non-random, intentional source.

Implications for Creativity’s Source

The findings challenge naturalistic accounts of creativity, suggesting that the ability to generate meaningful phrases exceeds the universe’s computational resources. The authors propose that human creativity, suffused with semantic and intentional content, points to a non-material or external source of active information.

The need for active information implies an intelligent design, potentially guiding evolutionary processes or directly enabling human creative capacities.

Take Away

The Mind Matters podcast discussion casts doubt on the ability of evolutionary processes to account for human creativity. By demonstrating the immense information cost of even simple meaningful phrases and the necessity of active information, Holloway and Marks argue that naturalistic explanations fall short. Their work invites further exploration into the origins of creativity, suggesting that the genius of the human mind requires an external, intelligent source beyond the material world.

Monday, 21 April 2025

Even mouse brain for the win?

 Even a Mouse Brain Reveals Staggering Complexity


The science media have been ablaze recently with a major achievement: Princeton neuroscientists have mapped the staggering complexity of a cubic millimetre of the visual area of a mouse’s brain — about a poppy seed’s worth. Given the complexity of even a mouse’s brain, that is a remarkable feat.

Mouse Meets Matrix

PBS tells us that the mouse supplied the scientists’ data by watching, among other things, The Matrix (1999):

Thanks to a mouse watching clips from “The Matrix,” scientists have created the largest functional map of a brain to date — a diagram of the wiring connecting 84,000 neurons as they fire off messages.

Using a piece of that mouse’s brain about the size of a poppy seed, the researchers identified those neurons and traced how they communicated via branch-like fibers through a surprising 500 million junctions called synapses.

The massive dataset, published Wednesday by the journal Nature, marks a step toward unraveling the mystery of how our brains work. The data, assembled in a 3D reconstruction colored to delineate different brain circuitry, is open to scientists worldwide for additional research — and for the simply curious to take a peek. 

“HOW A MOUSE WATCHING ‘THE MATRIX’ HELPED SCIENTISTS CREATE THE LARGEST MAP OF A BRAIN TO DATE,” APRIL 9, 2025. THE PAPERS FROM THE PROJECT ARE HERE

At the New York Times, science writer Carl Zimmer provides a bit of perspective:

The human brain is so complex that scientific brains have a hard time making sense of it. A piece of neural tissue the size of a grain of sand might be packed with hundreds of thousands of cells linked together by miles of wiring. In 1979, Francis Crick, the Nobel-prize-winning scientist, concluded that the anatomy and activity in just a cubic millimeter of brain matter would forever exceed our understanding.

“It is no use asking for the impossible,” Dr. Crick wrote.

Forty-six years later, a team of more than 100 scientists has achieved that impossible, by recording the cellular activity and mapping the structure in a cubic millimeter of a mouse’s brain — less than one percent of its full volume. In accomplishing this feat, they amassed 1.6 petabytes of data — the equivalent of 22 years of nonstop high-definition video. 

“AN ADVANCE IN BRAIN RESEARCH THAT WAS ONCE CONSIDERED IMPOSSIBLE”, APRIL 9, 2025
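
Incidentally, the video comparison in that passage is easy to sanity-check. A quick sketch of the implied bitrate (my arithmetic, assuming nonstop footage):

```python
# What video bitrate makes 1.6 petabytes equal 22 years of nonstop video?
BYTES = 1.6e15                        # 1.6 petabytes
SECONDS = 22 * 365.25 * 24 * 3600     # 22 years

print(f"{BYTES * 8 / SECONDS / 1e6:.0f} Mbps")  # ~18 Mbps, typical for HD
```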


Materialism’s Last Stop

The underlying message of Zimmer’s article is that the human brain is really the same sort of thing as the mouse brain, just more complex, and that we will eventually reduce it to a map. And, although no one quite says it, the human mind is to be understood as merely the output of a complex brain. The news release from Princeton made that clear:

“It’s just a beginning,” [team co-lead] Seung said. “But it’s opening the door to a new era of realistic brain simulations. And so the next question becomes — and people will ask — can that ever be done with a human brain? And then the next question is, well, even if you could simulate a human brain, and it was very faithful, would it be conscious?”

When asked what he thought about it, he laughed. “I don’t have any more authority to make a statement on that than you do. But when people say, ‘I don’t believe a simulation of a brain would be conscious,’ then I say, ‘Well, how do you know you’re not a simulation?’” 

SCOTT LYON, “SCIENTISTS MAP THE HALF-BILLION CONNECTIONS THAT ALLOW MICE TO SEE,” APRIL 9, 2025

The problem with Seung’s reasoning is, of course, this: If we don’t know that we are not simulations, we also don’t know that anything we think we know is real. Life in The Matrix is a high price to pay in order to maintain a materialist view of the mind.

And if it takes a hundred scientists to map a cubic millimeter of a mouse’s brain, even the material world — never mind the immaterial world of the mind — is not likely to have a simple explanation.

Sunday, 20 April 2025

The fall of reductionism?

 Are “Mind” and “Brain” the Same Thing?


In a thought-provoking interview hosted by Wesley J. Smith for the Humanize podcast, three scholars — philosopher Angus Menuge, neurosurgeon Michael Egnor, and engineer Brian Krouse — explore the relationship between the mind and the brain, a subject of the recent book Minding the Brain. The conversation revolves around some of the most profound questions in science and philosophy: What is the mind? Is it reducible to the brain? Do we have free will? And how do humans differ from machines and animals? Download the podcast or listen to it here.

The Mind Beyond Measurement

Menuge begins by addressing a fundamental limitation in neuroscience: while brain activity can be correlated with emotional or cognitive states, thoughts themselves cannot be localized in space the way physical objects can. This distinction leads naturally into the concept of dualism — the philosophical view that the mind is distinct from the physical brain.

As host, Smith raises a common question: must dualists also be theists? Menuge clarifies that one can embrace dualism from a purely secular standpoint. Many philosophers have concluded from experience and introspection that mental phenomena cannot be reduced to neural mechanisms — regardless of their theological commitments.

The Free Will Debate

Much of the discussion centers on free will, which the participants see as a defining trait of human beings. Dr. Egnor delves into the famous experiments by Benjamin Libet (1916–2007), which suggested that the brain initiates actions before we become consciously aware of our decisions. But Libet also found that people could veto actions their brains had already initiated, a phenomenon he termed “free won’t.”

Egnor passionately argues that denying free will undermines moral responsibility and paves the way for totalitarian ideologies. He lists five reasons to affirm free will, including its universality in human experience, the logical inconsistency of denying it, and new physics that disproves classical determinism.  

Are We Just Fancy Computers?

Brian Krouse and Angus Menuge tackle the increasingly popular notion that the brain is merely a computer, and that the mind is nothing more than information processing. They say no. Despite massive advances in computational neuroscience, even simple organisms like the worm C. elegans, with just a few hundred neurons, resist complete computational modeling. If we can’t fully understand a nematode, how much less can we claim mastery over the human brain, which contains trillions of connections?

Egnor, co-author with Denyse O’Leary of The Immortal Mind (June 3, 2025), adds a philosophical dimension to this critique. Drawing on the concept of intentionality — the “aboutness” of thought — he argues that the mind is categorically unlike computation. Thoughts carry meaning, while computation is blind to meaning. A word processor doesn’t care what you type; it just processes symbols. This, he claims, shows the mind is not only distinct from computation but its opposite.

Human Uniqueness and the Limits of AI

The discussion also touches on the rise of large language models like ChatGPT. While these systems appear intelligent, Krouse emphasizes that their “hallucinations” (errors) reveal their lack of true understanding. They do not grasp meaning; they merely complete patterns. Menuge underscores the danger of forgetting the fundamental distinction between machines and human creativity. The capacity for abstraction, purpose, and moral reasoning sets humans apart from both animals and AI.

Egnor illustrates this difference through a humorous but profound observation: his dog finds deep meaning in the smell of bacon, but she doesn’t reflect on nutrition or ethics. Human cognition, by contrast, includes abstract thought, moral deliberation, and metaphysical inquiry.

Why It Matters

As the conversation concludes, Menuge insists that these philosophical debates are not merely academic. Understanding the mind’s immaterial nature opens up new horizons for science and human flourishing. Our ability to transcend our biological limitations — to think about universal truths, moral ideals, and the cosmos itself — is central to what makes us human.

The guests all stress that a recent book on these themes, Minding the Brain, is not dogmatic but exploratory. The book presents diverse perspectives and invites open-minded inquiry. As Smith remarks, this is precisely what science and philosophy ought to be: a generous dialogue among differing views in pursuit of deeper truth.

Sunday, 17 March 2024

The mind is as real as the brain?

 Consciousness Observes Different Laws from Physics


Robert Lawrence Kuhn interviewed British philosopher and pastor Keith Ward on “What’s the Stuff of Mind and Brain?” Ward is an idealist philosopher who “believes that the material universe is an expression or creation of a Supreme Mind, namely that of God.” 

He explains how we can know that the mind is not simply what the brain does. One way is that the mind or consciousness functions according to different rules:

Kuhn: [5:53] Keith, what is it that we need to combine with the brain to make this non-material consciousness?

Ward: [6:04] Well, you need — what Buddhists would say is — thoughts and feelings and sensations and perceptions. And this is a stream of, believe it or not, consciousness. And that is something which is at least partly produced by the brain. It’s causally correlated with events in the brain, that is to say, but it also has its own psychical or spiritual or mental forms of causation.

So let me give you one example. [6:35] If I go through a mathematical calculation, I don’t know what’s happening in my brain at all. And I don’t believe that when I get a logically correct result and I say — amazingly, 2 plus two does equal 4 — I don’t believe that that is produced by purely physical laws in the brain. It is a logical calculation and there are laws of thought which produce it. So that’s what you need.

Kuhn: [6:57] So Keith, do you need something like a soul to combine with the brain to make consciousness?

Ward: [7:04] That’s a loaded word. I think the most important distinction I would make is between the laws of physics, which are mechanical in the sense they’re not directed, they’re not for the sake of anything, they’re just proceeding in accordance with mathematical equations … To contrast the laws of physics with the laws of thought, which you use in mathematical calculations for example, … you’ve got a criterion of correctness… the laws of mathematical and logical thinking are not reducible to or statable in terms of laws of physics or of any known science. So there must at least be two completely different ways of understanding what human beings are, a physical way and a way concerned with thinking — and I would say feeling and perception as well. And these you have to put these two together and I believe that nobody on Earth knows how to do that.

Ward is stressing that it is only in the intellectual world that concepts like correct vs. incorrect (or right vs. wrong) are meaningful. That’s a different world from the one created by physics. The unacknowledged difference between the two is one of the reasons materialist philosophies are not working out well in the study of consciousness.


Tuesday, 12 March 2024

The ego that cogitates is beyond the grasp of the physical sciences?

 “Lived Experience” Is Science’s Blind Spot


Last month we noted an article by University of Rochester astrophysicist Adam Frank at Big Think. There he protested the use of the term “hallucinate” to describe absurd chatbot glitches: “Its mistake is not a matter of making a false statement about the world because it doesn’t know anything about the world. There is no one in there to know anything about anything.”

In that short essay, he mentioned that he and two colleagues — Dartmouth College theoretical physicist Marcelo Gleiser and philosopher Evan Thompson — would publish a book this month, The Blind Spot: Why Science Cannot Ignore Human Experience, offering a bigger picture. Now that the book is out, they talk a bit more about it:

Cosmology tells us that we can know the Universe and its origin only from our inside position, not from the outside. We live within a causal bubble of information — the distance light traveled since the Big Bang — and we cannot know what lies outside. Quantum physics suggests that the nature of subatomic matter cannot be separated from our methods of questioning and investigating it. In biology, the origin and nature of life and sentience remain a mystery despite marvelous advances in genetics, molecular evolution, and developmental biology. Ultimately, we cannot forgo relying on our own experience of being alive when we seek to comprehend the phenomenon of life. Cognitive neuroscience drives the point home by indicating that we cannot fully fathom consciousness without experiencing it from within. 

ADAM FRANK AND MARCELO GLEISER AND EVAN THOMPSON, THE “BLIND SPOT” IN SCIENCE THAT’S FUELING A CRISIS OF MEANING, BIG THINK, MARCH 7, 2024

The Heart of Science

What about the grand narratives of science? “At the heart of science lies something we do not see that makes science possible, just as the blind spot lies at the heart of our visual field and makes seeing possible.”

The tragedy the Blind Spot forces on us is the loss of what’s essential to human knowledge — our lived experience. The Universe and the scientist who seeks to know it become lifeless abstractions. Triumphalist science is actually humanless, even if it springs from our human experience of the world. This disconnection between science and experience, the essence of the Blind Spot, lies at the heart of the many challenges and dead ends science currently faces in thinking about matter, time, life, and the mind. 

FRANK, GLEISER AND THOMPSON, A CRISIS OF MEANING

What Gets Ignored

They are right about the dead ends. But is it true that the dead ends result merely from ignoring human experience? Surely, what’s ignored (or, more usually, denied or forbidden for discussion) is the immaterial nature of the human mind. Also off the table are questions like whether a cosmos where some beings (ourselves) clearly have immaterial intelligence can be created if an Intelligence does not underlie the universe. It’s quite likely that some fundamental questions cannot be answered within the allowed materialist framework.

But it’s interesting to see that these three thinkers are posing the questions — at least in this essay — in an open-ended way, almost as if they sense that dredging up pat materialist answers that don’t really work won’t help much.

Saturday, 24 February 2024

Determinism is theatre?

 Reply to Free Will Deniers: Show Me


Free will denial is a cornerstone of materialist–determinist ideology. We are, say the deniers, purely physical machines, meat robots.

Atheist-materialist evolutionary biologist Jerry Coyne is a prominent proponent of deterministic free will denial, and there are many others — philosopher Stephen Cave, biologist Robert Sapolsky, author Sam Harris, attorney Clarence Darrow, to name just a few.

From Harris:

How can we be “free” as conscious agents if everything that we consciously intend is caused by events in our brain that we do not intend and of which we are entirely unaware?… My choices matter — and there are paths towards making wiser ones — but I cannot choose what I choose. And if it ever appears that I do — for instance, after going back and forth between two options — I do not choose to choose what I choose. There is a regress here that always ends in darkness.

Free will deniers invariably acknowledge that we have the ineluctable sense of freely choosing, and that our belief in free will is a cornerstone of human psychology, of our social interaction, of our moral codes and of our judicial system. Nonetheless, deniers claim, we are deluded. We are not free at all — we are slaves to the laws of physics and chemistry that govern the physiology of our brains.

How Do We Know What Words Really Mean?

What to make of this bizarre viewpoint that we have no genuine freedom to choose — a viewpoint contrary to the lived experience of every human being? It is helpful to consider the question on a different level — not “Do we have free will?” but rather “What does it mean to believe we don’t have free will?”

What does it mean to believe anything? Philosopher Ludwig Wittgenstein (1889–1951) critiqued our conventional understanding of the “meaning” of words, and I think he sheds light on what both meaning and belief really are. He pointed out (in his middle work, most notably The Blue Book) that it is confused to say that the meaning of a word is assigned via an interior mental act or act of interpretation. Why do we attribute the meaning of a word to brain physiology, when we could just as plausibly attribute it to the physiology of the larynx, tongue or hand when we speak or write the word?

Meaning, according to Wittgenstein, is just the way the word is used in life. Meaning, in a sense, is use. It is common for a word to have several different meanings, depending on the context in which it is used.

Even the word believe itself has several meanings depending on use — “I believe it’s going to rain,” “I believe in you,” “I believe that I will have a ham sandwich,” etc. The difference in the meaning of believe in these instances is in the context of use — what we mean by believe is determined by the context (the gestalt) in which we use the word. To believe something is to behave in a certain way.

Belief is behavior. The belief-behavior can include speaking or writing the belief, of course, but belief is behavior in a much broader sense than merely speaking. Belief is what you do, not merely what you say. Consider the statement by a serial adulterer: “I believe in fidelity and chastity.” Such a claim is not credible, because his behavior makes a mockery of that belief. Serial adulterers believe in serial adultery (otherwise, they wouldn’t do it), just as embezzlers believe in embezzlement and philanthropists believe in philanthropy. Belief is much more than words — it is, to use Wittgenstein’s phrase, a form of life. Belief is a way of living.

So, do free will deniers really believe that free will isn’t real? Of course not. Free will deniers live as if free will is real, despite their proclamations and their blog posts. What matters is what they do, not merely what they say. Every human being lives life as if free will is real. We all believe — as demonstrated by our behavior — in the fact that we choose some options and not others, that we have real moral accountability, that there is such a thing as justice. No one (outside of a mental hospital) really believes that we are meat robots without free will.

If you want to know what a free will denier really believes, steal his laptop or dent his fender and see if he holds you morally accountable.

So What’s Free Will Denial Really All About?

So what are free will deniers really doing when they say that they don’t believe in free will, but never act like free will isn’t real? Free will denial is determinist signaling, in which materialists flaunt their bona fides. It is analogous to a political yard sign or a cross worn around the neck.

It’s a way of announcing to the world who you are — whether or not you really believe (i.e., behave in accordance with) your politics or your faith. The difference between a political belief expressed on a sign or faith expressed via a pendant and free will denial is that sometimes the sign or cross do correspond to a way of life, and thus are real expressions of belief. Free will denial, on the other hand, never constitutes genuine belief, because it is not possible to live as if free will isn’t real.

Free Will Denial as Performance Art

Materialists don’t really mean it because they never do it. To truly believe that free will isn’t real — to believe that our actions are wholly determined by our brain chemicals, for which we have no moral responsibility whatsoever — is to utterly abandon any real sense of morality, to deny not only the salience but even the meaning of right and wrong behavior. It means to live every moment as if you and all people on earth are meat robots, utterly devoid of choice or free agency. A person who really believed that free will isn’t real wouldn’t hold a murderer morally responsible for murder, any more than the gun or the bullet is. If you carelessly dent a genuine free will denier’s car in a parking lot, he wouldn’t hold you responsible any more than he’d hold your car responsible.

So the next time a LARPing materialist declares to you that he doesn’t believe in free will, say this: “Your free will denial is performance art. What you do is immeasurably louder than what you say. You don’t really believe that free will isn’t real, unless you live like it isn’t real.”

Sunday, 18 February 2024

The mind contemplates itself?

 Consciousness, a Hall of Mirrors, Baffles Scientists


To contemplate consciousness is, as professor of religion Greg Peterson put it, like looking into and out of a window at the same time. No surprise then that philosophers of science call it the Hard Problem of Consciousness. The inexorable progress of brain imaging was supposed to dissolve the conundrum but we spoil no surprise by saying that new information and insights only deepened it.

Among the many quests, one has been to discover the seat of consciousness. An image rises unprompted. Seat? Does consciousness have a seat at the table? Wait a minute. Isn’t consciousness the table? You see the difficulty, of course. At any rate, the search is for the specific bit of the brain that spews out the unthinking electrical charges that create consciousness.

It’s been a long and winding road. Brain imaging has not turned out to be a road map of the mind. For example, functional MRI imaging only tells researchers where blood is traveling in the brain. The problem is, as a Duke University research group pointed out, “the level of activity for any given person probably won’t be the same twice, and a measure that changes every time it is collected cannot be applied to predict anyone’s future mental health or behavior.”

Rise and Fall of the Lizard Brain

The most widely popularized theory of mind — the triune brain theory — depends on organization rather than imaging. Originally developed by Yale University physiologist and psychiatrist Paul D. MacLean (1913–2007) decades ago and promoted by celebrity skeptic Carl Sagan (1934–1996), it divides the brain into three parts. The reptilian brain controls things like movement and breathing, the mammalian brain controls emotion, and the human cerebral cortex controls language and reasoning.

This approach resulted in immensely reassuring ideas; for example, a widely disliked boss or politician morphed into a “dinosaur brain.” In 2021, Jeff Hawkins, inventor of the PalmPilot (a smartphone predecessor), even claimed to have figured out how human intelligence works, relying on his model of the mammalian brain.

The human brain was bound to disappoint pop culture in this matter because key functions are distributed throughout. Triune brain theory also doesn’t square with the high animal intelligence recently found in (non-vertebrate) octopuses, and claims for the mammalian brain in particular don’t square with the high intelligence found in some birds — let alone with the fact that human consciousness remains an absolute outlier.

But MacLean’s idea has proven much too culturally satisfying to be spoiled by mere neuroscience. As one research team notes, “despite the mismatch with current understandings of vertebrate neurobiology, MacLean’s ideas remain popular in psychology. (A citation analysis shows that neuroscientists cite MacLean’s empirical articles, whereas non-neuropsychologists cite MacLean’s triune-brain articles.)”

It’s All in the Connections

Never mind, the exciting new world of -omes (genomes, epigenomes, biomes…) beckons. The connectome — essentially, a complete “wiring diagram” of the brain — might possibly identify human consciousness. In 2010, computational neuroscientist Sebastian Seung told humanity, “I am my connectome,” a thought on which he expanded in his 2012 book, Connectome: How the Brain’s Wiring Makes Us Who We Are. In 2012, National Institutes of Health director Francis Collins was thinking along the same lines: “Ever wonder what is it that makes you, you? Depending on whom you ask, there are a lot of different answers, but these days some of the world’s top neuroscientists might say: ‘You are your connectome.’”

That moment has passed. Harvard neuroscientist Jeff Lichtman, who is trying to map the brain, surveys the awful complexity nearly a decade later and sums up,

…if I asked, “Do you understand New York City?” you would probably respond, “What do you mean?” There’s all this complexity. If you can’t understand New York City, it’s not because you can’t get access to the data. It’s just there’s so much going on at the same time. That’s what a human brain is. It’s millions of things happening simultaneously among different types of cells, neuromodulators, genetic components, things from the outside. There’s no point when you can suddenly say, “I now understand the brain,” just as you wouldn’t say, “I now get New York City.”

GRIGORI GUITCHOUNTS, “AN EXISTENTIAL CRISIS IN NEUROSCIENCE,” NAUTILUS, JANUARY 22, 2020

In short, once we are into abstractions, we are no longer dealing with the concrete substance of the brain.

It’s All in the Electricity

But what about the bioelectric fields that swarm throughout the brain? Bioelectric currents rely on ions rather than the electrons that flow in wires, but they are still electricity. Evolutionary biologist and lawyer Tam Hunt tells us, “Nature seems to have figured out that electric fields, similar to the role they play in human-created machines, can power a wide array of processes essential to life. Perhaps even consciousness itself.” That’s a remarkable idea because it includes the notion that our individual cells exhibit consciousness: “Something like thinking, they argue, isn’t just something we do in our heads that requires brains. It’s a process even individual cells themselves, and not requiring any kind of brain, also take part in.”

This sounds cool but gets us nowhere. We have no reason to believe that our individual brain cells are conscious; what we know is that we are conscious as whole human beings. We could say the same about claims that everything is conscious (panpsychism) or that nothing is (eliminativism). Whatever else the claims do, they shed no light on the conundrum at hand.

Consciousness as an Undetected State of Matter

Max Tegmark, MIT physicist and author of Our Mathematical Universe: My Quest for the Ultimate Nature of Reality (Knopf, 2014), goes still further. He suggests that consciousness is a so far undetected state of matter, perceptronium, “defined as the most general substance that feels subjectively self-aware.” Which, again, gets us precisely nowhere.

Prominent neuroscientist Christof Koch notes more mundanely that physical distance in the brain matters: “A new study documents an ordering principle to these effects: the farther removed from sensory input or motor output structures, the less likely it is that a region contributes to consciousness.” And that’s about as far as neuroscience has got.

Koch has also written a book, The Feeling of Life Itself (MIT Press, 2019), where he tells us, among many other things, of dogs, Der Ring des Nibelungen, sentient machines, the loss of his belief in a personal God, and sadness, all seen as “signposts in the pursuit of his life’s work — to uncover the roots of consciousness.” And that is where we must leave the subject for now. We are back where we started — but we do have interesting books.

Thursday, 8 February 2024

The inspiration and creativity of actual intelligence vs. the running of algorithmic programs by artificial intelligence.

 Artificial General Intelligence: The Oracle Problem


In computer science, oracles are external sources of information made available to otherwise self-contained algorithmic processes. Oracles are in effect “black boxes” that can produce a solution for any instance of a given problem, and then supply that solution to a computer program or algorithm. For example, an oracle that could provide tomorrow’s price for a given stock could be used in an algorithm that today — with phenomenal returns — executes buy-and-sell orders for that stock. Of course, no such oracle actually exists (or if it does, it is a closely guarded secret). 

The point of oracles in computer science is not whether they exist but whether they can help us study aspects of algorithms. Alan Turing proposed the idea of an oracle that supplies information external to an algorithm in his 1938 doctoral dissertation. Some oracles, like tomorrow’s stock predictor, cannot be represented algorithmically. Others can, but the problems they solve may be so computationally intensive that no real-world computer could solve them. The concept of an oracle is important in computer science for understanding the limits of computation.
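
In programming terms, an oracle is simply a black box that an otherwise ordinary algorithm is allowed to query. A minimal sketch of the stock example above (names and details are mine, and the oracle callable is a stand-in; no such predictor exists):

```python
# An oracle-relative algorithm: the trading rule is ordinary code, but it
# consults a black box whose inner workings it knows nothing about.
from typing import Callable

def place_order(today_price: float,
                tomorrow_price_oracle: Callable[[], float]) -> str:
    """Decide a trade using information external to the algorithm."""
    tomorrow = tomorrow_price_oracle()   # the black-box consultation
    if tomorrow > today_price:
        return "buy"
    if tomorrow < today_price:
        return "sell"
    return "hold"

# Any callable can play the oracle; a real one would be a closely
# guarded secret.
print(place_order(100.0, lambda: 103.5))   # -> buy
```

The algorithm's correctness is analyzed relative to the oracle, which is exactly how Turing used the idea: it lets us reason about what computation could do if a given piece of external information were available.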

“Sing, Goddess, of the Anger of Achilles”

Turing’s choice of the word “oracle” was not accidental. Historically, oracles have denoted sources of information where the sender of the information is divine and the receiver is human. The Oracle of Delphi stands out in this regard, but there’s much in antiquity that could legitimately count as oracular. Consider, for instance, the opening of Homer’s Iliad: “Sing, goddess, of the anger of Achilles, son of Peleus.” The goddess here is one of the muses, presumably Calliope, the muse of epic poetry. In the ancient world, the value of artistic expression derived from its divine inspiration. Of course, prophecy in the Bible also falls under this conception of the oracular, as does real-time divine guidance of the believer’s life (as described in Proverbs 3:5–6 and John 16:13). 

Many of us are convinced that we have received information from oracles that can’t be explained in terms of everyday communication among people or everyday operations of the mind. We use many words to describe this oracular flow of information: inspiration, intuition, creative insight, dreams, reverie, collective unconscious, etc. Sometimes the language used is blatantly oracular. Einstein, for instance, told his biographer Banesh Hoffmann, “Ideas come from God.” Because Einstein did not believe in a personal God (Einstein would sometimes say he believed in the God of Spinoza), Hoffmann interpreted Einstein’s remark metaphorically to mean, “You cannot command the idea to come. It will come when it’s good and ready.” 

The Greatest Mathematician of His Age

Now granted, computational reductionists will dismiss such oracular talk as misleading nonsense. Really, all the information is there in some form already in the computational systems that make up our minds, and even though we are not aware of how the information is being processed, it is being processed nonetheless in purely computational and mechanistic ways. Clearly, this is what computational reductionists are bound to say. But the testimony of people in which they describe themselves as receiving information from an oracular realm needs to be taken seriously, especially if we are talking about people of the caliber of Einstein. Consider, for instance, how Henri Poincaré (1854–1912) described the process by which he made one of his outstanding mathematical discoveries. Poincaré was the greatest mathematician of his age (in 1905 he was awarded the Bolyai Prize ahead of David Hilbert). Here is how he described his discovery:

For fifteen days I strove to prove that there could not be any functions like those I have since called Fuchsian functions. I was then very ignorant; every day I seated myself at my work table, stayed an hour or two, tried a great number of combinations and reached no results. One evening, contrary to my custom, I drank black coffee and could not sleep. Ideas rose in crowds; I felt them collide until pairs interlocked, so to speak, making a stable combination. By the next morning I had established the existence of a class of Fuchsian functions, those which come from the hypergeometric series; I had only to write out the results, which took but a few hours. Then I wanted to represent these functions by the quotient of two series; this idea was perfectly conscious and deliberate, the analogy with elliptic functions guided me. I asked myself what properties these series must have if they existed, and I succeeded without difficulty in forming the series I have called theta-Fuchsian.

Just at this time I left Caen, where I was then living, to go on a geologic excursion under the auspices of the school of mines. The changes of travel made me forget my mathematical work. Having reached Coutances, we entered an omnibus to go some place or other. At the moment when I put my foot on the step the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience’ sake I verified the result at my leisure.

Again, the computational reductionist would contend that Poincaré’s mind was in fact merely operating as a computer. Accordingly, the crucial computations needed to resolve his theorems were going on in the background and then just happened to percolate into consciousness once the computations were complete. But the actual experience and self-understanding of thinkers like Einstein and Poincaré, in accounting for their bursts of creativity, is very different from what we expect of computation, which is to run a computer program until it yields an answer. Humanists reject such a view of human creativity. Joseph Campbell, in The Power of Myth, offered this rejoinder to computational reductionism: “Technology is not going to save us. Our computers, our tools, our machines are not enough. We have to rely on our intuition, our true being.” Of course, artists of all stripes have from ages past to the present invoked muses of one form or another as inspiring their work. 

A Clash of Worldviews?

Does this controversy over the role of oracles in human cognition therefore merely describe a clash of worldviews between a humanism that refuses to reduce our humanity to machines and a computational reductionism that embraces such a reduction? Is this controversy just a difference in viewpoints based on a difference in first principles? In fact, oracles pose a significant theoretical and evidential challenge to computational reductionism that goes well beyond a mere collision of worldviews. Computational reductionism faces a deep conceptual problem independent of any worldview controversy.

Computational reductionism faces an oracle problem. The problem may be described thus: Our most advanced artificial intelligence systems, which I’m writing about in this series about Artificial General Intelligence (AGI), require input of external information to keep them from collapsing in on themselves. This problem applies especially to large language models (LLMs) and their most advanced current incarnation, ChatGPT-4. I’m not talking here about the role of human agency in creating LLMs, which no one disputes. I’m not even talking here about all the humanly generated data that these neural networks ingest or all the subsequent training of these systems by humans. What I’m talking about here is that once all this work is done, these systems cannot simply be set loose and thrive on their own. They need continual propping up from our human intelligence. For LLMs, we are the oracles that make and continue to make them work. 

The Death Knell for AGI

The need for ongoing human intervention in these systems may seem counterintuitive. It is also the death knell for AGI. Because if AGI is to succeed, it must surpass human intelligence, which means it must be able to leave us behind in the dust, learning and growing on its own, thriving and basking in its own marvelous capabilities. Like Aristotle’s unmoved mover God, who does not think about humanity or anything other than himself because it is in the nature of God only to think about the highest thing, and the highest thing of all is God. Thus, the Aristotelian God spends all his time contemplating only himself. A full-fledged AGI would do likewise, not deigning to occupy itself with lesser matters. (As an aside, AGI believers might take comfort in an AGI being so self-absorbed that it would not bother to destroy humanity. But to the degree that flesh-and-blood humans are a threat, or even merely an annoyance, to an AGI, it may be motivated to kill us all so as not to be distracted from contemplating itself!)

Unlike the Aristotelian God, LLMs do not thrive without human oracles continually feeding them novel information. There are sound mathematical reasons for this. The neural networks that are the basis for LLMs reside in finite dimensional vector subspaces. Everything in these spaces can therefore be expressed as a linear combination of finitely many basis vectors. In fact, they are simplexes and the linear combinations are convex, implying convergence to a center of mass, a point of mediocrity. When neural networks output anything, they are thus outputting what’s inherent in these predetermined subspaces. In consequence, they can’t output anything fundamentally new. Worse yet, as they populate their memory with their own productions and thereafter try to learn by teaching themselves, they essentially engage in an act of self-cannibalism. In the end, these systems go bankrupt because intelligence by its nature requires novel insights and creativity, which is to say, an oracle. 
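
The geometric point — that convex mixing contracts everything toward a center of mass — can be illustrated numerically. A toy sketch of my own, a cartoon of the dynamic rather than a model of any actual LLM:

```python
# Toy illustration: if each generation's outputs are convex combinations
# of the previous generation's points, the population collapses toward
# its center of mass. A cartoon of "model collapse," not an actual LLM.
import numpy as np

rng = np.random.default_rng(42)
points = rng.normal(size=(200, 2))     # generation 0: the "training data"

for generation in range(31):
    if generation % 10 == 0:
        print(f"gen {generation:2d}: spread = {points.std():.6f}")
    # Every new point is a random convex mixture of the current points,
    # so each generation lies inside the convex hull of the last.
    weights = rng.dirichlet(np.ones(len(points)), size=len(points))
    points = weights @ points
# The spread shrinks toward zero: nothing outside the original hull can
# ever be produced, and the mixing squeezes the hull itself.
```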

Research backs up the claim that LLMs run aground in the absence of oracular intervention — specifically, external information added by humans. This becomes clear from the abstract of a recent article titled “The Curse of Recursion: Training on Generated Data Makes Models Forget”:

GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks… What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet.

Think of It This Way

LLMs like ChatGPT are limited by a fixed finite number of dimensions, but the creativity needed to make these artificial intelligence models thrive requires added dimensions. Creativity is always orthogonal to the status quo, and orthogonality, by being at right angles with the status quo, always adds new dimensions. Oracles add such creativity. Without oracles, artificial intelligence systems become solipsistic, turning in on themselves, rehashing only what is in them already, and eventually going bankrupt because they cannot supply the daily bread needed to sustain them. AGI’s oracle problem is therefore real and damning. 

But if AGI faces an oracle problem, don’t humans likewise face an oracle problem? Suppose AGIs require human oracles to thrive. Yet if oracles are so important for creativity, don’t humans need access to oracles as well? But how, asks the computational reductionist, does the external information needed for human intelligence to thrive get to us and into us? A purely mechanistic world is a solipsistic world with all its information internal and self-generated. On mechanistic principles, there’s no way for humans to have access to such oracles.

But why think that the world is mechanistic? Organisms, as we’ve seen, give no signs of being mechanisms. And physics allows for an informationally porous universe. Quantum indeterminacy, for instance, cannot rule out the input of information from transcendent sources. The simplest metaphor for understanding what’s at stake is the radio. If we listen to a symphony broadcast on the radio, we don’t think that the radio is generating the music we hear. Instead, the radio is a conduit for the music from another source. Humans are such conduits. And machines need to be such conduits (for ongoing human intelligent input) if they are to have any real value to us. 

Saturday, 3 February 2024

Scientists' ruminations on ruminants' EQ

 Researchers: Goats Can Read Basic Human Emotions


Readers may wonder at first whether this research was worth doing, but hang on. It turns out that goats can understand basic human emotions by voice alone, according to research co-led by Prof. Alan McElligott of City University of Hong Kong and Dr. Marianne Mason of the University of Roehampton, London:

In the experiment, goats listened to a series of voice playbacks expressing either a positive (happy) or a negative (angry) valence during the habituation phase, i.e., when the goat becomes accustomed to the human voice and valence, so they would respond less as the phase progressed. The recording was then switched from a positive to a negative valence (or vice versa) before being reversed.

“We predicted that if goats could discriminate emotional content conveyed in the human voice, they would dishabituate, looking faster and for longer towards the source of the sound, following the first shift in valence,” said Dr. Marianne Mason, University of Roehampton, UK.

MICHAEL GIBB, CITY UNIVERSITY OF HONG KONG, “RESEARCH SHOWS GOATS CAN TELL IF YOU ARE HAPPY OR ANGRY BY YOUR VOICE ALONE,” PHYS.ORG. THE PAPER IS OPEN ACCESS

When the emotional valence changed, 75 percent of the goats looked at the speaker for a longer time. That suggested that the goats had indeed sensed a change in emotional content.

Dogs, Horses, Livestock

Dogs and horses are well known to be sensitive to human emotions but, it can be argued, that is why humans form close relationships with them. What about livestock — animal species that we work with, and maybe live with, but are less likely to bond with? If they also can sense human emotions, that fact should be factored into their care, the researchers argue:

 the results are essential for adding to our understanding of animal behaviour, welfare and emotional experiences, especially since goats and other livestock will hear the human voice in their daily lives. Negatively valenced voices, like angry ones, may cause fear in animals. In contrast, positive ones may be perceived as calming and may even encourage animals to approach and help with human-animal bonding. 

CITY UNIVERSITY OF HONG KONG, “BY YOUR VOICE ALONE”

Reason and Moral Choice

It shouldn’t be very surprising if a wide range of animals can understand the most basic human emotional states, like contentedness vs. anger or maybe fear. After all, those are precisely the elements of the mind that we all share. What animals don’t understand are reason and moral choice, the elements we don’t share. The goat may sense that the human is angry but she does not think “I wonder if he is angry because the price of feed has gone up?” or “It’s not morally right for him to go around shouting at everyone like that! It’s not our fault if the feed price went up!” She responds to simple emotion because that is what she understands. In the same way, humans can understand, and even study, the animal emotions we share.

Interestingly, this distinction plays a role in arguments about the immortality of the soul. As philosopher Edward Feser writes, “ … it is because human beings are rational animals that our souls can survive the deaths of our bodies, since … rational or intellectual powers are essentially incorporeal.” The underlying assumption is that abstractions, ideas, and moral principles are immaterial (incorporeal); thus the aspect of our minds that apprehends them must be too. The basic emotions that we share with animals are, on that view, more rooted in physical nature.

One outcome of this view, of course, is that, as Christof Koch has complained, it meant that no dogs, including his beloved Purzel, go to heaven. However, C. S. Lewis had thought of a possible way around that problem. For more on that story, see “Do Any Dogs Go to Heaven? If So, Why?”

P.S. With all due respect to Mr. Feser, his comments perfectly illustrate how false religious ideas like reductive spiritualism short-circuit clear thinking. Obviously, emotions like joy and anger are as immaterial as our moral sense. So the claim that the ability to experience and comprehend emotion is any less evidence of a reductive spirit soul (if indeed there is such a thing) than possession of a moral sense is arbitrary. Another consideration: at the beginning of our lives, our moral sense is probably on the same level as a dog’s or a cat’s. Is this evidence that this supposed reductive spirit soul is absent at that stage of development?

Friday, 21 July 2023

The brain: Jack of all trades

 How Can a Woman Missing Her Olfactory Bulbs Still Smell?


Ever since neuroscientists started imaging the brain, they’ve been turning up cases where people are missing brain parts we would expect them to need in order to do something — but they are doing that very thing anyway. One example, written up in Live Science in 2019, concerns women who are missing their olfactory bulbs but can still smell.

Researchers have discovered a small group of people that seem to defy medical science: They can smell despite lacking “olfactory bulbs,” the region in the front of the brain that processes information about smells from the nose. It’s not clear how they are able to do this, but the findings suggest that the human brain may have a greater ability to adapt than previously thought.

YASEMIN SAPLAKOGLU, “WOMEN MISSING BRAIN’S OLFACTORY BULBS CAN STILL SMELL, PUZZLING SCIENTISTS,” LIVE SCIENCE, NOVEMBER 6, 2019. THE PAPER IN NEURON IS OPEN ACCESS.

All the More Remarkable

The story is all the more remarkable when we consider that the first woman identified had an especially good sense of smell; that was why she had signed up for the Israeli researchers’ study. Deciding to pursue the matter, the researchers tested other women. On the ninth try, they found another left-handed woman who could smell without an olfactory bulb.

A researcher who was not involved in the study, Joel Mainland of the Monell Chemical Senses Center in Philadelphia, was asked for comment:

The findings are “pretty counter to most of what the field thinks,” Mainland told Live Science. “I think it’s pretty critical that we figure out what’s happening.”

Yes. But that could take a while because there are a number of similar situations out there.

Last year, Medical Xpress reported on a woman who lacked a left temporal lobe, believed to be the language area of the brain:

EG told Fedorenko and her team that she only came to realize she had an unusual brain by accident—her brain was scanned in 1987 for an unrelated reason. Prior to the scan she had no idea she was different. By all accounts she behaved normally and had even earned an advanced degree. She also excelled in languages — she speaks fluent Russian — which is all the more surprising considering the left temporal lobe is the part of the brain most often associated with language processing.

Eager to learn more about the woman and her brain, the researchers accepted her into a study that involved capturing images of her brain using an fMRI machine while she was engaged in various activities, such as language processing and math. In so doing, they found no evidence of language processing happening in the left part of her brain; it was all happening in the right. They found that it was likely the woman had lost her left temporal lobe as a child, probably due to a stroke. The area where it had been had become filled with cerebrospinal fluid. To compensate, her brain had developed a language network in the right side of her brain that allowed her to communicate normally. The researchers also learned that EG had a sister who was missing her right temporal lobe, and who also had no symptoms of brain dysfunction — an indication, the researchers suggest, that there is a genetic component to the stroke and recovery process in the two women.

BOB YIRKA, “WOMAN WITH NO LEFT TEMPORAL LOBE DEVELOPED A LANGUAGE NETWORK IN THE RIGHT SIDE OF HER BRAIN,” MEDICAL XPRESS, APRIL 14, 2022. THE PAPER IS OPEN ACCESS.

It’s also come out that one in 4,000 people lacks a corpus callosum, the structure of neural fibers that transfers information between the brain’s two hemispheres. It would seem a pretty important part of the brain, yet 25 percent of those who lack it show no symptoms. The others suffer mild to severe cognitive disorders. But we may well wonder how people manage in this situation at all:

In a study published in the journal Cerebral Cortex, neuroscientists from the University of Geneva (UNIGE) discovered that when the neuronal fibres that act as a bridge between the hemispheres are missing, the brain reorganises itself and creates an impressive number of connections inside each hemisphere. These create more intra-hemispheric connections than in a healthy brain, indicating that plasticity mechanisms are involved. It is thought that these mechanisms enable the brain to compensate for the losses by recreating connections to other brain regions using alternative neural pathways.

UNIVERSITÉ DE GENÈVE, “A MALFORMATION ILLUSTRATES THE INCREDIBLE PLASTICITY OF THE BRAIN,” SCIENCEDAILY, OCTOBER 30, 2020. THE PAPER IS OPEN ACCESS.

Prior to Brain Imaging

Recall that, prior to brain imaging, so long as a person was functioning normally, no one had any reason to suppose that a key brain part might simply be missing. And suppose its absence was discovered at autopsy: who is to say that the absence of that part didn’t play some role in bringing about the person’s death? So it was only in recent decades that researchers discovered people of normal abilities with missing brain parts. That’s probably why we now hear expressions like “seem to defy medical science” and “incredible plasticity” from the science media.

Neuroplasticity is perhaps best understood as the human mind reaching out past physical gaps and barriers in any number of inventive ways. And it raises a question: If the mind is merely what the brain does, as many materialist pundits claim, what is the mind when the brain … doesn’t? At times, the mind appears to be picking up where the brain left off. 

Michael Egnor and I are looking forward to tackling topics like that in The Human Soul (Worthy, 2025).

Sunday, 9 July 2023

Mathematics: mother or daughter of creativity?

Is Mathematics Discovered or Invented?


Some think math is invented. (See an article by Peter Biles.) The evidence, though, points toward discovery. Simultaneous mathematical discovery supports this viewpoint: breakthroughs are sometimes reported independently by two or more mathematicians at roughly the same time. The most famous example is the simultaneous discovery of calculus by Isaac Newton and Gottfried Wilhelm Leibniz. Newton was secretive about his discovery and shared his results with only a few members of the Royal Society. When Leibniz published his discovery of the calculus, Newton charged him with plagiarism. Today, historians agree that the two discoveries were independent of each other.

Some Other Examples

Here are some other lesser-known examples of simultaneous discovery.

The Papoulis-Gerchberg Algorithm (PGA). The PGA is an ingenious method for recovering lost sections of functions that are bandlimited. (I describe the PGA in detail in my Handbook of Fourier Analysis.) The PGA was first reported by Athanasios Papoulis [1] but was first published in an archival journal, independently, by Gerchberg [2]; the two discoveries were made independently of each other.
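To make the idea concrete, here is a minimal Python sketch of the PGA iteration. This is not the Handbook's code; the function name papoulis_gerchberg, the mask arrays, the iteration count, and the toy signal below are my own illustrative choices. The iteration simply alternates between two constraints: zero out the out-of-band Fourier coefficients, then restore the samples known to be correct.

import numpy as np

def papoulis_gerchberg(observed, known_mask, band_mask, n_iter=500):
    # observed:   the signal, with arbitrary values in the lost section
    # known_mask: True where the samples are trusted
    # band_mask:  True for FFT bins inside the signal's passband
    x = np.where(known_mask, observed, 0.0)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[~band_mask] = 0.0                    # enforce the bandlimit
        x = np.fft.ifft(X).real
        x[known_mask] = observed[known_mask]   # re-impose the known samples
    return x

# Toy example: a low-frequency signal with a section knocked out
n = 256
t = np.arange(n)
signal = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 5 * t / n)
known = np.ones(n, dtype=bool)
known[100:140] = False                         # the lost section
band = np.zeros(n, dtype=bool)
band[:8] = True                                # low-pass band, positive bins
band[-8:] = True                               # and their conjugates
recovered = papoulis_gerchberg(np.where(known, signal, 0.0), known, band)

Each pass can only reduce the out-of-band error energy, which is why Gerchberg framed the method as “error energy reduction.”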

The Karhunen–Loève Theorem, independently discovered by Kari Karhunen [3] and Michel Loève [4], showed that certain random processes can be represented as an infinite linear combination of orthogonal functions, analogous to a Fourier series.
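In standard notation (my paraphrase of the usual textbook statement, not a quotation from either paper), a zero-mean process $X_t$ with continuous covariance $K(s,t)$ on an interval $[a,b]$ expands as

\[
X_t \;=\; \sum_{k=1}^{\infty} Z_k \, e_k(t),
\qquad\text{where}\qquad
\int_a^b K(s,t)\, e_k(s)\, ds \;=\; \lambda_k \, e_k(t),
\]

the $e_k$ are the orthonormal eigenfunctions of the covariance operator, and the coefficients $Z_k$ are uncorrelated random variables with variance $\lambda_k$. The analogy to a Fourier series is exact, except that the basis functions are tailored to the process itself.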

Non-Euclidean Geometry. Euclid published his Elements circa 300 BC, and his work wonderfully established Euclidean geometry. It was only in the first half of the 19th century that three men, János Bolyai, Carl Friedrich Gauss, and Nikolai Lobachevsky, independently discovered non-Euclidean geometry. Jenkovszky et al. [5] note: “The striking coincidence of independent discoveries… after more than two thousand years of stagnation, may seem almost miraculous.”

Space-Variant Processing. Here’s a personal example. During my graduate work, I developed a method for performing general space-variant processing. My advisor, John F. Walkup, found out that the same method had been simultaneously discovered at Stanford by his own PhD advisor’s research group. Rather than competing, we agreed to publish all of our findings in the same issue of the journal Applied Optics [6, 7].

Einstein’s Shoulders

In the context of the argument for discovery, some inventions can curiously be considered discovered rather than invented. Isaac Newton famously said that “if I have seen further [than others], it is by standing on the shoulders of giants.” Einstein built on Newton’s discoveries in classical physics and, in turn, stood on Newton’s shoulders with the formulation of relativity. Modern physicists stand on Einstein’s shoulders. The advancement of technology can likewise be considered standing on an ever-increasing stack of shoulders. This is certainly the case in artificial intelligence: Rosenblatt’s and Widrow’s early work on neural networks led to the discovery of error backpropagation training, which led to deep convolutional neural networks, deep learning, and the generative AI we use today.

Inventions can be discovered. An example of an invention discovered by two men is the telephone. Alexander Graham Bell is credited with inventing the telephone, but according to the Library of Congress:

Elisha Gray, a professor at Oberlin College, applied for a caveat of the telephone on the same day Bell applied for his patent of the telephone … Bell’s lawyer got to the patent office first. The date was February 14, 1876. He was the fifth entry of that day, while Gray’s lawyer was 39th. Therefore, the U.S. Patent Office awarded Bell with the first patent for a telephone, US Patent Number 174,465 rather than honor Gray’s caveat.

If true, both Gray and Bell were standing on the shoulders of those who proposed the telegraph and glimpsed the possibility of the telephone.

Philosophers might contemplate the parallel between the discovery of inventions and the debate between predestination and free will. If inventions and advancements in mathematics are discovered, then the future is, in a sense, predestined by our discoveries. The pros and cons of that debate will continue well beyond the arguments presented here.

References

1. A. Papoulis, “A new method of image restoration,” Joint Services Technical Activity Report 39, 1973–74.
2. R. W. Gerchberg, “Super-resolution through error energy reduction,” Optica Acta, vol. 21, pp. 709–720, 1974.
3. Kari Karhunen, “Zur Spektraltheorie stochastischer Prozesse,” Ann. Acad. Sci. Fennicae 37 (1946).
4. Michel Loève, Probability Theory. Princeton, N.J.: Van Nostrand, 1955.
5. László Jenkovszky, Matthew J. Lake, and Vladimir Soloviev, “János Bolyai, Carl Friedrich Gauss, Nikolai Lobachevsky and the New Geometry: Foreword,” Symmetry 15, no. 3 (2023): 707.
6. R. J. Marks II, J. F. Walkup, M. O. Hagler, and T. F. Krile, “Space-variant processing of one-dimensional signals,” Applied Optics, vol. 16, pp. 739–745 (1977).
7. Joseph W. Goodman, Peter Kellman, and E. W. Hansen, “Linear space-variant optical processing of 1-D signals,” Applied Optics 16, no. 3 (1977): 733–738.

Tuesday, 27 June 2023

We are free to acknowledge free moral agency

 Free Will: What Are the Reasons to Believe in It?


University of Missouri psychology professor Kennon Sheldon’s message is neatly summed up in an opening statement: “Regardless of whether humans do or don’t have free will, psychological research shows it’s beneficial to act as if you do.”

The author of Freely Determined: What the New Psychology of the Self Teaches Us About How to Live (Basic Books, 2022) responds to philosophers who say that we do not have free will:

All my life, I’ve struggled with the question of whether humans have ‘free will’. It catalysed my decision to become a psychologist and continues to inspire my research to this day, especially as it relates to the kinds of goals people set for themselves, and the effects of goal-striving on people’s happiness and wellbeing.

I’ve come to the conclusion that people really do have free will, at least when it is defined as the ability to make reasoned choices among action possibilities that we ourselves think up…

Regardless of who is correct in this debate, my work has led me to a second conclusion that I consider even more important than whether we have free will or not. It’s that a belief in our own capacity to make choices is critical for our mental health. At the very least, this belief lets us function ‘as if’ we have free will, which greatly benefits us.

KENNON SHELDON, “THE THREE REASONS WHY IT’S GOOD FOR YOU TO BELIEVE IN FREE WILL,” PSYCHE, JUNE 15, 2023.

An Obvious Problem

Now, the obvious problem with his approach is that if we believe in free will simply because that belief is supposed to be good for our mental health, then we really don’t believe in it.

A simple example suffices: We sometimes hear that being optimistic is also better for mental health. In one sense, that’s true. If we focus on the positive things, our lives feel more pleasant and that is bound to be better for mental health. But what if we have no good reason for optimism? What if we live under an active volcano that shows signs of erupting? Optimism (“it probably won’t really happen this year”) could delay evacuation past the point of no return.

So let’s look back at free will in this light: if we believe that we have it, and that belief is true, we are empowered to deal with temptations and addictions, firm in the knowledge that we really can cast the deciding vote for our best possible outcome. But if free will is not real, we are setting ourselves up for delusion if we succeed and for needless disappointment and misery if we fail. Not only that, but we are also participating in an unfair system in which people are judged and punished for unwise or bad behavior that they cannot really help. So functioning “as if” we have free will turns out not to be very good at all.

“A Better Person”?

Sheldon goes on to say,

The second reason why I consider belief in free will to be beneficial is that it makes you a better person. Studies in social psychology show clearly that, if people become convinced that they have no free will, there can be negative effects on their ethical behaviour.

SHELDON, PSYCHE, 2023

Perhaps that’s true, but it amounts to saying that we should be deluded for our own good, even though delusions are generally said to be bad for us. Is there any light at the end of this tunnel?

Sheldon offers a reason why some thinkers deny free will:

You might wonder why anyone would choose to believe in determinism, given the clear negative effects of this belief? There are several possible reasons. Some people might think that determinism is the most scientific and intellectually sophisticated position to take, and they like feeling smarter than others.

SHELDON, PSYCHE, 2023

Well, if science matters, the good news is that neuroscience provides sound reasons to believe in free will. As Stony Brook neurosurgeon Michael Egnor has pointed out, the work of neuroscience pioneer Benjamin Libet established that we certainly have “free won’t” — the ability to choose not to do something:

[W]hat he found was, when you made a decision to push the button [in a psychological experiment], you still had the brain wave that preceded the decision by half a second. But when you decided to veto pushing the button, there was no new brain wave at all. It was silent in terms of brain waves. But you did make the decision to veto. So he said that it wasn’t so much that you have free will but you have free won’t. That is, you have the ability to decide whether or not you are going to comply with what your brain is urging you to do. And that compliance is not material. It’s not a brain wave. It’s immaterial.

MICHAEL EGNOR, “HOW A NEUROSCIENTIST IMAGED FREE WILL (AND “FREE WON’T”),” MIND MATTERS NEWS, MARCH 19, 2020 

What Quantum Mechanics Shows

Physicist Marcelo Gleiser also notes that science does not really support the view that free will is an illusion: “[T]he mind is not a solar system with strict deterministic laws. We have no clue what kinds of laws it follows, apart from very simplistic empirical laws about nerve impulses and their propagation, which already reveal complex nonlinear dynamics.” In any event, quantum mechanics shows that nature is indeterminate at the fundamental level and that the observer’s decision of what to measure plays a role in what happens. One outcome is that a number of younger thinkers accept free will as consistent with the evidence.

In other words, we can accept free will based on the evidence. There is no particular need to think of it as a possibly pleasant delusion.