
Sunday, 26 May 2019

An academic pans logic. Explains a lot.

Why Human Reason Didn’t “Evolve”

 David Klinghoffer | @d_klinghoffer

 The indispensable neuroscientist Michael Egnor, writing at Mind Matters, applies some needed intellectual rigor in disciplining atheist philosopher Justin E.H. Smith. In a new book, Irrationality: A History of the Dark Side of Reason, Professor Smith “argues that reason is inferior to the non-rational behavior of animals.” Egnor: “Think of the irony: a professor of philosophy, who is paid only to reason, uses reason to argue against reason. Welcome to the bowels of atheist metaphysics.”

According to Smith, writing at Aeon, “Philosophers and cognitive scientists today generally comprehend the domain of reason as a certain power of making inferences, confined to the thoughts and actions of human beings alone.” Well, not exactly. Reason, as Dr. Egnor reminds him, is nothing more or less than the ability to think abstractly, as opposed to thinking about physical objects.

A Clever Dog

An example of the latter would be thinking about how to get out of your doggie play pen. Our family is currently testing out a puppy (long story) who has surprised me repeatedly when I wasn’t looking by somehow escaping from a closed play pen. It took me a while to figure out how he was doing it, since the pen is about twice his height. Even then I had to catch him in the act. Clever dog! But he wasn’t engaging in the activity of reasoning by, let’s say, contemplating the idea of freedom. Egnor:
Only man thinks abstractly; that is the ability to reason. No animal, no matter how clever, can think abstractly or reason. Animals can be very clever but their cleverness is always about concrete things — about the bone they are playing with, or about the stranger they are barking at. They don’t think about “play” or “threat” as abstract concepts.

Tough to Explain

Smith says, “Like echolocation in bats or photosynthesis in plants, reason is an evolved power.” But if reason is unique to humans, and exists nowhere else in life, did it indeed “evolve” from lower animals? That’s a tough question. In fact, Egnor explains why it didn’t, at least not by any purely material mechanism like the Darwinian one:
Reason is a power characteristic of man, to be sure, but it is not “an evolved power.” It didn’t “evolve.” The ability to reason didn’t evolve because it’s not a material power of the mind. Reason is an immaterial power of the mind — it is abstracted from particular things, and cannot logically be produced by a material thing.
A purely material process can’t “evolve” a spiritual power. Egnor is, as I said, indispensable. Read the rest at Mind Matters.

Why some can't help but disagree with determinists.

Physicist tells people to stop saying they have free will
January 13, 2016 Posted by vjtorley under Intelligent Design

Over at her Backreaction blog, Dr. Sabine Hossenfelder, a theoretical physicist based at the Frankfurt Institute for Advanced Studies in Frankfurt, Germany, has written an article titled, Free will is dead, let’s bury it. However, her arguments against free will are both scientifically unsound and philosophically dated. She writes:

There are only two types of fundamental laws that appear in contemporary theories. One type is deterministic, which means that the past entirely predicts the future. There is no free will in such a fundamental law because there is no freedom. The other type of law we know appears in quantum mechanics and has an indeterministic component which is random. This randomness cannot be influenced by anything, and in particular it cannot be influenced by you, whatever you think “you” are. There is no free will in such a fundamental law because there is no “will” – there is just some randomness sprinkled over the determinism.

In neither case do you have free will in any meaningful way.

These are the only two options, and all other elaborations on the matter are just verbose distractions. It doesn’t matter if you start talking about chaos (which is deterministic), top-down causation (which doesn’t exist), or insist that we don’t know how consciousness really works (true but irrelevant). It doesn’t change a thing about this very basic observation: there isn’t any known law of nature that lets you meaningfully speak of “free will”.

If you don’t want to believe that, I challenge you to write down any equation for any system that allows for something one could reasonably call free will…

The only known example for a law that is neither deterministic nor random comes from myself. But it’s a baroque construct meant as proof in principle, not a realistic model that I would know how to combine with the four fundamental interactions.

First of all, let’s get the bad science out of the way. Dr. Hossenfelder insists that top-down causation “doesn’t exist.” She should try telling that to George Ellis, an eminent cosmologist at the University of Cape Town, whose scientific work is described by Adam Frank in an article titled, How Does The World Work: Top-Down or Bottom-Up?, over at NPR’s 13.7: Cosmos And Culture blog:

In an essay for the physics website FQXi, Ellis argues for top-down causation. If he’s right, then much of our thinking about what matters would have to be revised.

To get a handle on how top-down causation works, Ellis focuses on what’s in front of all of us so much of the time: the computer. Computers are structured systems. They are built as a hierarchy of layers, extending from the wires in the transistors all the way up to the fully assembled machine, gleaming metal case and all.

Because of this layering, what happens at the uppermost levels — like you hitting the escape key — flows downward. This action determines the behavior of the lowest levels — like the flow of electrons through the wires — in ways that simply could not be predicted by just knowing the laws of electrons…

But the hardware, of course, is just one piece of the puzzle. This is where things get interesting. As Ellis explains:

Hardware is only causally effective because of the software which animates it: by itself hardware can do nothing. Both hardware and software are hierarchically structured with the higher level logic driving the lower level events.
In other words, it’s software at the top level of structure that determines how the electrons at the bottom level flow. Hitting escape while running Word moves the electrons in the wires in different ways than hitting escape does when running Photoshop. This is causation flowing from top to bottom…

Ellis is arguing for a kind of emergentism whereby new entities and new rules emerge with new levels of structure in the universe (an argument also made by Stuart Kauffman here at 13.7). But Ellis is going further. He is arguing that the top is always exerting an influence on the bottom.

Ellis concludes his FQXi essay, Recognising Top-Down Causation, with a bold hypothesis, and he puts forward some concrete research proposals which he believes will fundamentally transform the nature of science (bold emphases mine – VJT):

Hypothesis: bottom up emergence by itself is strictly limited in terms of the complexity it can give rise to. Emergence of genuine complexity is characterised by a reversal of information flow from bottom up to top down [27].

The degree of complexity that can arise by bottom-up causation alone is strictly limited. Sand piles, the game of life, bird flocks, or any dynamics governed by a local rule [28] do not compare in complexity with a single cell or an animal body. The same is true in physics: spontaneously broken symmetry is powerful [16], but not as powerful as symmetry breaking that is guided top-down to create ordered structures (such as brains and computers). Some kind of coordination of effects is needed for such complexity to emerge.

The assumption that causation is bottom up only is wrong in biology, in computers, and even in many cases in physics, for example state vector preparation, where top-down constraints allow non-unitary behaviour at the lower levels. It may well play a key role in the quantum measurement problem (the dual of state vector preparation) [5]. One can bear in mind here that wherever equivalence classes of entities play a key role, such as in Crutchfield’s computational mechanics [29], this is an indication that top-down causation is at play.

There are some great discussions of the nature of emergent phenomena in physics [17,1,12,30], but none of them specifically mention the issue of top down causation. This paper proposes that recognising this feature will make it easier to comprehend the physical effects underlying emergence of genuine complexity, and may lead to useful new developments, particularly to do with the foundational nature of quantum theory. It is a key missing element in current physics.

Having disposed of Dr. Hossenfelder’s claim that top-down causation doesn’t exist, I’d now like to examine her philosophical argument. Briefly, the argument she puts forward is a variant of what Bob Doyle describes as The Standard Argument against Free Will, at his Information Philosopher Website:

First, if determinism is the case, the will is not free. We call this the Determinism Objection.

Second, if indeterminism and real chance exist, our will would not be in our control, we could not be responsible for random actions. We call this the Randomness Objection.

Together, these objections can be combined in the Responsibility Objection, namely that no Free Will model has yet provided us an intelligible account of the agent control needed for moral responsibility.

Both parts are logically and practically flawed, partly from abuse of language that led some 20th-century philosophers to call free will a “pseudo-problem,” and partly from claims to knowledge that are based on faulty evidence.

So why does Doyle think the Standard Argument Against Free Will is flawed? First, Doyle argues that determinism is empirically false: quantum physics introduces a significant degree of indeterminism into the world, even at the macro level. Second, Doyle maintains that chance would undercut freedom only if it were a cause of our actions, but in his preferred two-stage Cogito model, it’s not: chance serves to dish up a random array of possibilities, and then something called “MacroMind” takes over, and makes a decision, based on the individual’s past history of preferences. However, I don’t propose to discuss Doyle’s Cogito model further here, because Doyle himself rejects the strong libertarian view (which I support), that “a decision is directly free only if, until it is made, the agent is able to do other than make that decision, where this is taken to require that, until the action occurs, there is a chance that it will not occur.” Indeed, Doyle explicitly acknowledges that in his Cogito model, during the split-second before a choice is made, “the decision could be reliably (though not perfectly) predicted by a super-psychiatrist who knew everything about the agent and was aware of all the alternative possibilities.” To my mind, that’s a Pickwickian account of freedom.

For her part, Dr. Hossenfelder vehemently disagrees with Doyle’s contention that quantum physics entails indeterminism: she points out that superdeterminism (the hypothesis that the quantum measurements that experimenters choose to make are themselves predetermined by the laws of physics) is an equally consistent possibility – and, I might add, one that was acknowledged by John Bell himself (see The Ghost in the Atom: A Discussion of the Mysteries of Quantum Physics, by Paul C. W. Davies and Julian R. Brown, 1986/1993, pp. 45-47). For my part, I think physicist Anton Zeilinger identified the flaw in superdeterminism when he perceptively commented:

[W]e always implicitly assume the freedom of the experimentalist… This fundamental assumption is essential to doing science. If this were not true, then, I suggest, it would make no sense at all to ask nature questions in an experiment, since then nature could determine what our questions are, and that could guide our questions such that we arrive at a false picture of nature. (Dance of the Photons, Farrar, Straus and Giroux, New York, 2010, p. 266.)

Additionally, I should point out that even on a classical, Newtonian worldview, determinism does not follow. Newtonian mechanics is popularly believed to imply determinism, but this belief was exploded over two decades ago by John Earman (A Primer on Determinism, 1986, Dordrecht: Reidel, chapter III). In 2006, Dr. John Norton put forward a simple illustration which is designed to show that violations of determinism can arise very easily in a system governed by Newtonian physics (The Dome: An Unexpectedly Simple Failure of Determinism. 2006 Philosophy of Science Association 20th Biennial Meeting (Vancouver), PSA 2006 Symposia.) In Norton’s example, a mass sits on a dome in a gravitational field. After remaining unchanged for an arbitrary time, it spontaneously moves in an arbitrary direction. The mass’s indeterministic motion is clearly compatible with Newtonian mechanics. Norton describes his example as an exceptional case of indeterminism arising in a Newtonian system with a finite number of degrees of freedom. (On the other hand, indeterminism is generic for Newtonian systems with infinitely many degrees of freedom.)
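For readers who would like to see the mathematics, here is the standard presentation of Norton’s dome (my own summary, in LaTeX notation; r is the arc distance from the apex along the dome’s surface, and the mass starts at rest at the apex):

    % Norton's dome: height profile and radial equation of motion
    h(r) = \frac{2}{3g}\, r^{3/2}, \qquad \frac{d^{2}r}{dt^{2}} = r^{1/2}

    % Two distinct solutions satisfy Newton's second law with r(0) = 0, \dot{r}(0) = 0:
    r(t) = 0 \ \text{for all } t, \qquad \text{or} \qquad
    r(t) = \tfrac{1}{144}\,(t - T)^{4} \ \text{for } t \ge T \ \ (r(t) = 0 \ \text{for } t \le T)

Since T is arbitrary, Newton’s laws do not determine when, or whether, the mass starts to slide; that is the indeterminism Norton describes.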

Sometimes the Principle of Least Action is said to imply determinism. But the principle, as usually stated, applies only to systems in which total mechanical energy (kinetic plus potential) is conserved, and it concerns the trajectories of particles in motion; I fail to see how it would apply to collisions between particles, in which mechanical energy is not necessarily conserved. At best, it seems that the universe is fully deterministic only if particles behave like perfectly elastic billiard balls – which is true only in an artificially simplified version of the cosmos.

Finally, I should like to add that in my own top-down model of free will (as with Doyle’s), chance does not figure as a cause of our choices. In my model, it serves as the matrix upon which non-random, but undetermined, free choices are imposed, via a form of top-down causation. Here’s how I described it in a 2012 post titled, Is free will dead?:

…[I]t is easy to show that a non-deterministic system may be subject to specific constraints, while still remaining random. These constraints may be imposed externally, or alternatively, they may be imposed from above, as in top-down causation. To see how this might work, suppose that my brain performs the high-level act of making a choice, and that this act imposes a constraint on the quantum micro-states of tiny particles in my brain. This doesn’t violate quantum randomness, because a selection can be non-random at the macro level, but random at the micro level. The following two rows of digits will serve to illustrate my point.

1 0 0 0 1 1 1 1 0 0 0 1 0 1 0 0 1 1
0 0 1 0 0 0 0 1 1 0 1 1 0 1 1 1 0 1

The above two rows of digits were created by a random number generator. The digits in some of these columns add up to 0; some add up to 1; and some add up to 2.

Now suppose that I impose the non-random macro requirement: keep the columns whose sum equals 1, and discard the rest. I now have:

1 0 1 1 1 0 0 0 0 1
0 1 0 0 0 1 1 1 1 0

Each row is still random (at the micro level), but I have now imposed a non-random constraint on the system as a whole (at the macro level). That, I would suggest, is what happens when I make a choice.
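To make the selection procedure concrete, here is a minimal sketch in Python (an illustration only; the row length and the “sum equals 1” constraint simply follow the example above):

    import random

    def random_rows(length=18):
        # Two rows of independent random bits: the micro-level randomness.
        return ([random.randint(0, 1) for _ in range(length)],
                [random.randint(0, 1) for _ in range(length)])

    def keep_columns(top, bottom, target_sum=1):
        # Macro-level constraint: keep only the columns whose digits sum to target_sum.
        kept = [(a, b) for a, b in zip(top, bottom) if a + b == target_sum]
        return [a for a, _ in kept], [b for _, b in kept]

    top, bottom = random_rows()
    top_kept, bottom_kept = keep_columns(top, bottom, target_sum=1)

    print("before:", top, bottom)
    print("after: ", top_kept, bottom_kept)
    # Each surviving row is still an unpredictable bit sequence (micro level),
    # yet every remaining column satisfies the imposed constraint (macro level).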

Top-down causation and free will

What I am proposing, in brief, is that top-down (macro -> micro) causation is real and fundamental (i.e. irreducible to lower-level acts). For if causation is always bottom-up (micro -> macro) and never top-down, or alternatively, if top-down causation is real, but only happens because it has already been determined by some preceding occurrence of bottom-up causation, then our actions are simply the product of our body chemistry – in which case they are not free, since they are determined by external circumstances which lie beyond our control. But if top-down causation is real and fundamental, then a person’s free choices, which are macroscopic events that occur in the brain at the highest level, can constrain events in the brain occurring at a lower, sub-microscopic level, and these constraints can then give rise to neuro-muscular movements, which occur in accordance with that person’s will. (For instance, in the case I discussed above, relating to rows of ones and zeroes, the requirement that the columns must add up to 1 might result in the neuro-muscular act of raising my left arm, while the requirement that they add up to 2 might result in the act of raising my right arm.)
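As a toy sketch continuing the illustration above (the mapping from column sums to arm movements is, of course, purely hypothetical), a macro-level choice can be represented as the constraint that gets imposed, with the satisfied constraint then selecting the motor act:

    import random

    ACTIONS = {1: "raise left arm", 2: "raise right arm"}   # hypothetical mapping

    def choose_and_act(target_sum, length=18):
        top = [random.randint(0, 1) for _ in range(length)]
        bottom = [random.randint(0, 1) for _ in range(length)]
        # The macro-level choice is the constraint imposed on the micro-level bits.
        kept = [(a, b) for a, b in zip(top, bottom) if a + b == target_sum]
        return f"{len(kept)} columns satisfy 'sum = {target_sum}' -> {ACTIONS[target_sum]}"

    print(choose_and_act(1))
    print(choose_and_act(2))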

As far as I’m aware, there’s no law of physics which rules out the idea that patterns of random events (such as radioactive decay events) might turn out to be non-random at the macro level, while remaining random at the micro or individual level. Of course, as far as we know, radioactive decay events are random at the macro level, but the same might not be true of random neuronal firings in the brain: perhaps here, scientists might find non-random large-scale patterns co-existing with randomness at the micro level.

Putting it another way: even if laboratory experiments repeatedly failed to detect patterns in a random sequence of events over the course of time, this would not establish that these events are randomly distributed across space as well. Hence large-scale spatial patterns are perfectly compatible with temporal randomness, at the local level.

In her blog post, Dr. Hossenfelder challenges her readers to “write down any equation for any system that allows for something one could reasonably call free will.” It’s a challenge she has already met, with her free will function in part 3 of her arXiv essay, in which “each decision comes down to choosing one from ten alternatives described by the digits 0 to 9”:

Here is an example: Consider an algorithm that computes some transcendental number, τ, unknown to you. Denote with τ_n the n-th digit of the number after the decimal point. This creates an infinitely long string of digits. Let t_N be a time very far to the future, and let F be the function that returns τ_{N−i} for the choice the agent makes at time t_i.

This has the following consequence: The time evolution of the agent’s state is now no longer random. It is determined by F, but not (forward) deterministic: No matter how long you record the agent’s choices, you will never be able to predict, not even in principle, what the next choice will be…
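To see roughly how such a function behaves, here is a short sketch in Python (an illustration only: it uses the digits of e as the “unknown” transcendental number, a choice of mine, not Hossenfelder’s, and her essay does not specify any implementation):

    from decimal import Decimal, getcontext

    def transcendental_digits(n_digits):
        # Digits of e (a transcendental number) after the decimal point.
        getcontext().prec = n_digits + 10      # a few guard digits
        return str(Decimal(1).exp())[2:2 + n_digits]

    N = 1000                                   # "a time very far to the future"
    tau = transcendental_digits(N)

    def F(i):
        # The agent's choice at time t_i: the digit tau_{N-i}, one of 0..9.
        return int(tau[N - i])

    print([F(i) for i in range(1, 11)])
    # The choices are fully determined by F, yet no finite record of past
    # choices lets an observer predict the next one, since tau itself is unknown.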

Dr. Hossenfelder’s example is mathematically elegant, but it’s really not necessary in order to salvage the idea of free will. All that is needed is to show that the idea of top-down causation which is irreducible to preceding instances of bottom-up causation remains a valid one in physics, and that this, coupled with the suggestion that the mind can make non-random global selections from random sequences of events without destroying their local randomness, is enough to render the ideas of agent-causation and strong libertarian free will scientifically tenable. How the mind makes these selections and influences processes within the brain is a question for another day (for a discussion, see here and here). All that I have been concerned to show here is that no laws of physics are broken, even if one adopts a very robust version of libertarian free will.