
Sunday 26 May 2019

An academic pans logic. Explains a lot.

Why Human Reason Didn’t “Evolve”

 David Klinghoffer | @d_klinghoffer

 The indispensable neuroscientist Michael Egnor, writing at Mind Matters, applies some needed intellectual rigor in disciplining atheist philosopher Justin E.H. Smith. In a new book, Irrationality: A History of the Dark Side of Reason, Professor Smith “argues that reason is inferior to the non-rational behavior of animals.” Egnor: “Think of the irony: a professor of philosophy, who is paid only to reason, uses reason to argue against reason. Welcome to the bowels of atheist metaphysics.”

According to Smith, writing at Aeon, “Philosophers and cognitive scientists today generally comprehend the domain of reason as a certain power of making inferences, confined to the thoughts and actions of human beings alone.” Well, not exactly. Reason, as Dr. Egnor reminds him, is nothing more or less than the ability to think abstractly, as opposed to thinking about physical objects.

A Clever Dog

An example of the latter would be thinking about how to get out of your doggie play pen. Our family is currently testing out a puppy (long story) who has surprised me repeatedly when I wasn’t looking by somehow escaping from a closed play pen. It took me a while to figure out how he was doing it, since the pen is about twice his height. Even then I had to catch him in the act. Clever dog! But he wasn’t engaging in the activity of reasoning by, let’s say, contemplating the idea of freedom. Egnor:
Only man thinks abstractly; that is the ability to reason. No animal, no matter how clever, can think abstractly or reason. Animals can be very clever but their cleverness is always about concrete things — about the bone they are playing with, or about the stranger they are barking at. They don’t think about “play” or “threat” as abstract concepts.

Tough to Explain

Smith says, “Like echolocation in bats or photosynthesis in plants, reason is an evolved power.” But if reason is unique to humans, and exists nowhere else in life, did it indeed “evolve” from lower animals? That’s a tough question. In fact, Egnor explains why it didn’t, at least not by any purely material mechanism like the Darwinian one:
Reason is a power characteristic of man, to be sure, but it is not “an evolved power.” It didn’t “evolve.” The ability to reason didn’t evolve because it’s not a material power of the mind. Reason is an immaterial power of the mind — it is abstracted from particular things, and cannot logically be produced by a material thing.
A purely material process can’t “evolve” a spiritual power. Egnor is, as I said, indispensable. Read the rest at Mind Matters.

Why some can't help but disagree with determinists.

Physicist tells people to stop saying they have free will
January 13, 2016 Posted by vjtorley under Intelligent Design

Over at her BackReAction blog, Dr. Sabine Hossenfelder, a theoretical physicist based at the Frankfurt Institute for Advanced Studies in Frankfurt, Germany, has written an article titled, Free will is dead, let’s bury it. However, her arguments against free will are both scientifically unsound and philosophically dated. She writes:

There are only two types of fundamental laws that appear in contemporary theories. One type is deterministic, which means that the past entirely predicts the future. There is no free will in such a fundamental law because there is no freedom. The other type of law we know appears in quantum mechanics and has an indeterministic component which is random. This randomness cannot be influenced by anything, and in particular it cannot be influenced by you, whatever you think “you” are. There is no free will in such a fundamental law because there is no “will” – there is just some randomness sprinkled over the determinism.

In neither case do you have free will in any meaningful way.

These are the only two options, and all other elaborations on the matter are just verbose distractions. It doesn’t matter if you start talking about chaos (which is deterministic), top-down causation (which doesn’t exist), or insist that we don’t know how consciousness really works (true but irrelevant). It doesn’t change a thing about this very basic observation: there isn’t any known law of nature that lets you meaningfully speak of “free will”.

If you don’t want to believe that, I challenge you to write down any equation for any system that allows for something one could reasonably call free will…

The only known example for a law that is neither deterministic nor random comes from myself. But it’s a baroque construct meant as proof in principle, not a realistic model that I would know how to combine with the four fundamental interactions.

First of all, let’s get the bad science out of the way. Dr. Hossenfelder insists that top-down causation “doesn’t exist.” She should try telling that to George Ellis, an eminent cosmologist at the University of Cape Town, whose scientific work is described by Adam Frank in an article titled, How Does The World Work: Top-Down or Bottom-Up?, over at NPR’s 13.7: Cosmos And Culture blog:

In an essay for the physics website FQXi, Ellis argues for top-down causation. If he’s right, then much of our thinking about what matters would have to be revised.

To get a handle on how top-down causation works, Ellis focuses on what’s in front of all of us so much of the time: the computer. Computers are structured systems. They are built as a hierarchy of layers, extending from the wires in the transistors all the way up to the fully assembled machine, gleaming metal case and all.

Because of this layering, what happens at the uppermost levels — like you hitting the escape key — flows downward. This action determines the behavior of the lowest levels — like the flow of electrons through the wires — in ways that simply could not be predicted by just knowing the laws of electrons…

But the hardware, of course, is just one piece of the puzzle. This is where things get interesting. As Ellis explains:

Hardware is only causally effective because of the software which animates it: by itself hardware can do nothing. Both hardware and software are hierarchically structured with the higher level logic driving the lower level events.
In other words, it’s software at the top level of structure that determines how the electrons at the bottom level flow. Hitting escape while running Word moves the electrons in the wires in different ways than hitting escape does when running Photoshop. This is causation flowing from top to bottom…

Ellis is arguing for a kind of emergentism whereby new entities and new rules emerge with new levels of structure in the universe (an argument also made by Stuart Kauffman here at 13.7). But Ellis is going further. He is arguing that the top is always exerting an influence on the bottom.

Ellis concludes his FQXi essay, Recognising Top-Down Causation, with a bold hypothesis, and he puts forward some concrete research proposals which he believes will fundamentally transform the nature of science (bold emphases mine – VJT):

Hypothesis: bottom up emergence by itself is strictly limited in terms of the complexity it can give rise to. Emergence of genuine complexity is characterised by a reversal of information flow from bottom up to top down [27].

The degree of complexity that can arise by bottom-up causation alone is strictly limited. Sand piles, the game of life, bird flocks, or any dynamics governed by a local rule [28] do not compare in complexity with a single cell or an animal body. The same is true in physics: spontaneously broken symmetry is powerful [16], but not as powerful as symmetry breaking that is guided top-down to create ordered structures (such as brains and computers). Some kind of coordination of effects is needed for such complexity to emerge.

The assumption that causation is bottom up only is wrong in biology, in computers, and even in many cases in physics, for example state vector preparation, where top-down constraints allow non-unitary behaviour at the lower levels. It may well play a key role in the quantum measurement problem (the dual of state vector preparation) [5]. One can bear in mind here that wherever equivalence classes of entities play a key role, such as in Crutchfield’s computational mechanics [29], this is an indication that top-down causation is at play.

There are some great discussions of the nature of emergent phenomena in physics [17,1,12,30], but none of them specifically mention the issue of top down causation. This paper proposes that recognising this feature will make it easier to comprehend the physical effects underlying emergence of genuine complexity, and may lead to useful new developments, particularly to do with the foundational nature of quantum theory. It is a key missing element in current physics.

Having disposed of Dr. Hossenfelder’s claim that top-down causation doesn’t exist, I’d now like to examine her philosophical argument. Briefly, the argument she puts forward is a variant of what Bob Doyle describes as The Standard Argument against Free Will, at his Information Philosopher Website:

First, if determinism is the case, the will is not free. We call this the Determinism Objection.

Second, if indeterminism and real chance exist, our will would not be in our control, we could not be responsible for random actions. We call this the Randomness Objection.

Together, these objections can be combined in the Responsibility Objection, namely that no Free Will model has yet provided us an intelligible account of the agent control needed for moral responsibility.

Both parts are logically and practically flawed, partly from abuse of language that led some 20th-century philosophers to call free will a “pseudo-problem,” and partly from claims to knowledge that are based on faulty evidence.

So why does Doyle think the Standard Argument Against Free Will is flawed? First, Doyle argues that determinism is empirically false: quantum physics introduces a significant degree of indeterminism into the world, even at the macro level. Second, Doyle maintains that chance would undercut freedom only if it were a cause of our actions, but in his preferred two-stage Cogito model, it’s not: chance serves to dish up a random array of possibilities, and then something called “MacroMind” takes over, and makes a decision, based on the individual’s past history of preferences. However, I don’t propose to discuss Doyle’s Cogito model further here, because Doyle himself rejects the strong libertarian view (which I support), that “a decision is directly free only if, until it is made, the agent is able to do other than make that decision, where this is taken to require that, until the action occurs, there is a chance that it will not occur.” Indeed, Doyle explicitly acknowledges that in his Cogito model, during the split-second before a choice is made, “the decision could be reliably (though not perfectly) predicted by a super-psychiatrist who knew everything about the agent and was aware of all the alternative possibilities.” To my mind, that’s a Pickwickian account of freedom.

For her part, Dr. Hossenfelder vehemently disagrees with Doyle’s contention that quantum physics entails indeterminism: she points out that superdeterminism (the hypothesis that the quantum measurements that experimenters choose to make are themselves predetermined by the laws of physics) is an equally consistent possibility – and, I might add, one that was acknowledged by John Bell himself (see The Ghost in the Atom: A Discussion of the Mysteries of Quantum Physics, by Paul C. W. Davies and Julian R. Brown, 1986/1993, pp. 45-47). For my part, I think physicist Anton Zeilinger identified the flaw in superdeterminism when he perceptively commented:

[W]e always implicitly assume the freedom of the experimentalist… This fundamental assumption is essential to doing science. If this were not true, then, I suggest, it would make no sense at all to ask nature questions in an experiment, since then nature could determine what our questions are, and that could guide our questions such that we arrive at a false picture of nature. (Dance of the Photons, Farrar, Straus and Giroux, New York, 2010, p. 266.)

Additionally, I should point out that even on a classical, Newtonian worldview, determinism does not follow. Newtonian mechanics is popularly believed to imply determinism, but this belief was exploded over two decades ago by John Earman (A Primer on Determinism, 1986, Dordrecht: Reidel, chapter III). In 2006, Dr. John Norton put forward a simple illustration which is designed to show that violations of determinism can arise very easily in a system governed by Newtonian physics (The Dome: An Unexpectedly Simple Failure of Determinism. 2006 Philosophy of Science Association 20th Biennial Meeting (Vancouver), PSA 2006 Symposia.) In Norton’s example, a mass sits on a dome in a gravitational field. After remaining unchanged for an arbitrary time, it spontaneously moves in an arbitrary direction. The mass’s indeterministic motion is clearly compatible with Newtonian mechanics. Norton describes his example as an exceptional case of indeterminism arising in a Newtonian system with a finite number of degrees of freedom. (On the other hand, indeterminism is generic for Newtonian systems with infinitely many degrees of freedom.)

Sometimes the Principle of Least Action is said to imply determinism. But since the wording of the principle shows that it only applies to systems in which total mechanical energy (kinetic energy plus potential energy) is conserved, and as it deals with the trajectory of particles in motion, I fail to see how it would apply to collisions between particles, in which mechanical energy is not necessarily conserved. At best, it seems that the universe is fully deterministic only if particles behave like perfectly elastic billiard balls – which is only true in an artificially simplified version of the cosmos.

Finally, I should like to add that in my own top-down model of free will (as with Doyle’s), chance does not figure as a cause of our choices. In my model, it serves as the matrix upon which non-random, but undetermined, free choices are imposed, via a form of top-down causation. Here’s how I described it in a 2012 post titled, Is free will dead?:

…[I]t is easy to show that a non-deterministic system may be subject to specific constraints, while still remaining random. These constraints may be imposed externally, or alternatively, they may be imposed from above, as in top-down causation. To see how this might work, suppose that my brain performs the high-level act of making a choice, and that this act imposes a constraint on the quantum micro-states of tiny particles in my brain. This doesn’t violate quantum randomness, because a selection can be non-random at the macro level, but random at the micro level. The following two rows of digits will serve to illustrate my point.

1 0 0 0 1 1 1 1 0 0 0 1 0 1 0 0 1 1
0 0 1 0 0 0 0 1 1 0 1 1 0 1 1 1 0 1

The above two rows of digits were created by a random number generator. The digits in some of these columns add up to 0; some add up to 1; and some add up to 2.

Now suppose that I impose the non-random macro requirement: keep the columns whose sum equals 1, and discard the rest. I now have:

1 0 1 1 1 0 0 0 0 1
0 1 0 0 0 1 1 1 1 0

Each row is still random at the micro level, but I have now imposed a non-random, macro-level constraint on the system as a whole. That, I would suggest, is what happens when I make a choice.
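The filtering step described above is simple enough to make concrete. Here is a minimal Python sketch (the helper name and code are mine, not from the original post), applied to the same two randomly generated rows:

```python
def filter_columns(row1, row2, target=1):
    """Keep only the columns whose two digits sum to `target` --
    a non-random constraint imposed at the macro level."""
    kept = [(a, b) for a, b in zip(row1, row2) if a + b == target]
    return [a for a, _ in kept], [b for _, b in kept]

# The two rows from the post, produced by a random number generator:
row1 = [1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1]
row2 = [0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1]

f1, f2 = filter_columns(row1, row2)
# Each surviving row is still a micro-level random sequence, but every
# retained column now obeys the macro-level constraint:
assert all(a + b == 1 for a, b in zip(f1, f2))
print(f1)  # [1, 0, 1, 1, 1, 0, 0, 0, 0, 1]
print(f2)  # [0, 1, 0, 0, 0, 1, 1, 1, 1, 0]
```

Nothing in the surviving rows betrays the constraint when each is inspected on its own; the pattern exists only in the relation between them, which is the point of the illustration.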

Top-down causation and free will

What I am proposing, in brief, is that top-down (macro–>micro) causation is real and fundamental (i.e. irreducible to lower-level acts). For if causation is always bottom-up (micro–>macro) and never top-down, or alternatively, if top-down causation is real, but only happens because it has already been determined by some preceding occurrence of bottom-up causation, then our actions are simply the product of our body chemistry – in which case they are not free, since they are determined by external circumstances which lie beyond our control. But if top-down causation is real and fundamental, then a person’s free choices, which are macroscopic events that occur in the brain at the highest level, can constrain events in the brain occurring at a lower, sub-microscopic level, and these constraints can then give rise to neuro-muscular movements, which occur in accordance with that person’s will. (For instance, in the case I discussed above, relating to rows of ones and zeroes, the requirement that the columns must add up to 1 might result in the neuro-muscular act of raising my left arm, while the requirement that they add up to 2 might result in the act of raising my right arm.)

As far as I’m aware, there’s no law of physics which rules out the idea that patterns of random events (such as radioactive decay events) might turn out to be non-random at the macro level, while remaining random at the micro or individual level. Of course, as far as we know, radioactive decay events are random at the macro level too, but the same might not be true of random neuronal firings in the brain: perhaps here, scientists might find non-random large-scale patterns co-existing with randomness at the micro level.

Putting it another way: even if laboratory experiments repeatedly failed to detect patterns in a random sequence of events over the course of time, this would not establish that these events are randomly distributed across space as well. Hence large-scale spatial patterns are perfectly compatible with temporal randomness at the local level.

In her blog post, Dr. Hossenfelder challenges her readers to “write down any equation for any system that allows for something one could reasonably call free will.” It’s a challenge she has already met, with her free will function in part 3 of her arXiv essay, in which “each decision comes down to choosing one from ten alternatives described by the digits 0 to 9”:

Here is an example: Consider an algorithm that computes some transcendental number, τ, unknown to you. Denote with τn the n-th digit of the number after the decimal point. This creates an infinitely long string of digits. Let tN be a time very far to the future, and let F be the function that returns τN−i for the choice the agent makes at time ti.

This has the following consequence: The time evolution of the agent’s state is now no longer random. It is determined by F, but not (forward) deterministic: No matter how long you record the agent’s choices, you will never be able to predict, not even in principle, what the next choice will be…
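Her actual construction is in the essay, but its flavor can be sketched with a computable stand-in. In the toy Python version below, Liouville’s constant (whose n-th decimal digit is 1 exactly when n is a factorial) plays the role of τ; the function name F and the value of N are my choices, not hers. One caveat: because Liouville’s constant is computable, an observer who guessed the algorithm could predict the choices, so her argument requires τ to be genuinely unknown to any predictor.

```python
def tau_digit(n):
    """n-th decimal digit of Liouville's constant 0.110001000000...,
    which is 1 when n is a factorial (1, 2, 6, 24, ...) and 0 otherwise.
    A computable stand-in for the 'transcendental number unknown to you'."""
    k, fact = 1, 1
    while fact < n:
        k += 1
        fact *= k
    return 1 if fact == n else 0

N = 30  # the far-future time t_N (kept small here for illustration)

def F(i):
    """The agent's choice at time t_i: the digit tau_{N-i}, read backwards
    from position N. Fully determined by F, yet no finite record of past
    choices suffices to extrapolate the next one without knowing tau."""
    return tau_digit(N - i)

choices = [F(i) for i in range(10)]
print(choices)  # [0, 0, 0, 0, 0, 0, 1, 0, 0, 0] -- the 1 is digit 24 = 4!
```

The sequence is neither random (it is fixed by τ) nor forward-deterministic from the agent’s history alone, which is the property her example is built to exhibit.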

Dr. Hossenfelder’s example is mathematically elegant, but it’s really not necessary in order to salvage the idea of free will. All that is needed is to show that the idea of top-down causation which is irreducible to preceding instances of bottom-up causation remains a valid one in physics, and that this, coupled with the suggestion that the mind can make non-random global selections from random sequences of events without destroying their local randomness, is enough to render the ideas of agent-causation and strong libertarian free will scientifically tenable. How the mind makes these selections and influences processes within the brain is a question for another day (for a discussion, see here and here). All that I have been concerned to show here is that no laws of physics are broken, even if one adopts a very robust version of libertarian free-will.

Saturday 25 May 2019

Yet more on mindless intelligence.

More on "Intelligent Design Without a Designer"
Brendan Dixon April 3, 2016 3:22 AM

A reader, Eric, writes to ask about my post "Nature 'Learns' to Create Complexity? Think Again":

Like Brendan, I work on developing computer models and software for scientific applications. His comments about models vs. reality were, of course, spot on. It did occur to me that there is one possible partial response to a point that was made. Brendan writes:

Worse, whatever is "learned" by that organism must be passed along to the children. So, unless the paper's authors subscribe to some form of Lamarckism, which is exceedingly unlikely, without a clear inheritance path all the training value is lost.

A neo-Darwinist could respond with the same kind of switch that the synthesis used to rescue the original version of Darwinism. Darwinism originally did have a Lamarckian flavor to it (which is why it opposed and resisted Mendelian genetics). With the neo-Darwinian synthesis, they dropped the Lamarckian aspects, embraced genetics, and proposed that the gains were built in through genetic variations.

In a similar way, someone now could say that the "learning" isn't Lamarckian. Rather, it is the genetic variations to GRNs that succeed in being passed on (in contrast to those that were weeded out). The correspondence with environmental conditions comes in the form of the natural selection filtering that comes after the fact of the genetic change.

Yet, if that is how it should be understood, the issues raised in the preceding paragraph about training "over time and space" are still relevant. The genetic "learning" would not be limited to the life of any single individual, but it still would be constrained to accumulating within a line of ancestry.

That would make the "learning" generational in nature. Instead of a single copy of a network collecting "millions of data sets," one would have to look along an organism's ancestral line to observe the inherited network receiving a multitude of tweaks down through the successful generations.

At first glimpse, it might seem that this way of viewing it could succeed at making it work for the neo-Darwinist. However, it does come back to the details, such as the limited amount of successful tweaking that can take place in any single generation and the limited number of generations down to the present organism.

Quite simply, there does still seem to be a serious problem about accumulating the extent of learning that would be needed.

The other main thought I had was this. You cannot have a second time without having a first time. You cannot learn to use building blocks more effectively apart from first having building blocks that can be used somewhat effectively.

In my software, I know the value of modularization and being able to recombine components, but that doesn't happen by accident. I wouldn't be able to reuse composable parts if I did not first have such parts that allowed for recombination. Flexibility comes by design, and before one could have a flexible, adjustable, "educated" GRN, one would need first to have a GRN that from the start flexibly allows for "learning." Otherwise, one could not get off the ground.

Even granting for the sake of argument that GRNs can learn across generations, how do we expect that a learning GRN can first come to exist?

Darwinists are forever pushing their present problems aside by building on the idea that whatever they need already existed earlier (without explanation), whether it is functional information or GRNs that can learn.

By the way, if you haven't seen the movie Next, starring Nicolas Cage, I would suggest giving it a look. (It's available for streaming on Netflix.)

It's not a "great" movie in any sense, but it does do a very clever job at cinematically portraying what is essentially a visual allegory to the evolutionary perspective of a finding a winning path that has been filtered from among many other failed attempts. As with evolutionary thought, the attempts are temporally concurrent.

People who have seen this will more easily and more intuitively feel that such a strategy could work (so long as one does not delve into the messy details).

I appreciate Eric's thoughts. I suppose the answer depends on how the paper's authors envision the GRN learning. If it is no different than modification + selection that preserves those GRNs producing "better" phenotypes, then Eric's point is sound. And he is correct in that case to point out that an entire ancestral chain matters. If the GRN learns dynamically (which I don't believe was clear in the original paper), then inheritance remains a significant problem.

However, if Eric's perspective is correct and a GRN "learns" through descent with modification, the analogy with Neural Networks breaks down. Neural Networks succeed by changing the entire network in response to massive data sets (along with some notion of correctness). For example, imagine two critters: The GRN in one "learns" to adapt part of the network while the other adapts the other part. They cannot share that information (unless you can specify a mechanism by which they can mate and successfully blend what both "learned"). And adaptation in one is made without respect for the adaptation in the other. NNs succeed because the entire network adapts as a unit; that is, each node learns how to respond to inputs based on how the entire network has fared. You cannot teach one node at a time or even part of the network at a time -- the entire network responds. Training an NN one node at a time will, by definition, fail.
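The point that a neural network trains as a unit can be seen in miniature: in backpropagation, the single error signal measured at the output yields a nonzero gradient for every weight in the chain, so all layers update together. Here is a toy two-weight sketch in Python (the network, names, and numbers are mine, not AlphaGo's or the paper's):

```python
def forward(x, w1, w2):
    """Tiny two-layer network: y = w2 * relu(w1 * x)."""
    h = max(0.0, w1 * x)  # hidden activation
    return h, w2 * h

def gradients(x, target, w1, w2):
    """One backprop step by hand for squared error L = (y - target)**2.
    The error computed at the output propagates back through the whole
    chain, so BOTH layers receive an update -- the net adapts as a unit."""
    h, y = forward(x, w1, w2)
    dy = 2.0 * (y - target)                        # dL/dy at the output
    dw2 = dy * h                                   # dL/dw2
    dh = dy * w2                                   # error flowing backwards
    dw1 = dh * (1.0 if w1 * x > 0 else 0.0) * x    # dL/dw1, through the relu
    return dw1, dw2

dw1, dw2 = gradients(x=1.0, target=1.0, w1=0.5, w2=0.5)
print(dw1, dw2)  # -0.75 -0.75 -- one example updates every layer at once
```

Freezing either weight and training the other alone is precisely the "one node at a time" regime that, as noted above, fails to train the network as a whole.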

And, as Eric agrees, the quantity of data to succeed is massive. AlphaGo contained "millions" of connections across 12 (or 13?) layers (networks) in two different NNs. AlphaGo was originally fed roughly 30 million moves from 160,000 games, yet that was far from sufficient. It required more training by playing itself (I failed to find the number of moves evaluated during this phase, but I suspect it completed many more than 160,000 games). And then, to allow the search tree to run deep, they distributed its execution across over 2,000 CPUs (along with specialized, high-performance graphics processors of the kind used in video games). Even then, a small number of "hand crafted" rules lurked within (see here).

Comparing GRNs to NNs remains uninteresting given what we've learned about training NNs. If anything, what we've learned says that GRNs could not have been self-taught since they just don't have access to the necessary data or a means by which to ensure the entire GRN benefits from the training.

The quantity of data needed to train an NN is proportional (though not linearly) to the number of connections within the network. The more connections a network contains, the more training data necessary.

I do not know how many nodes or connections a GRN contains, but, in a way, it is immaterial: If the number of connections is small, the network cannot "recognize" anything significant. If it is large, then the data requirements will rapidly exceed that available. A small, trainable network will be uninteresting and a large, interesting network will be untrainable. (I like that statement. I believe it captures the challenge succinctly with a bit of rhythm.)


If someone were to respond by saying, "Ah, but we have thousands of GRNs," then they have a large network (where each node is itself a GRN, something akin to a deep learning problem) that will require massive quantities of data.

Patron saint of the master race?

Why Darwinism Can Never Separate Itself from Racism

 David Klinghoffer | @d_klinghoffer

 

It’s great to have insightful colleagues. Denyse O’Leary, with Discovery Institute’s Bradley Center for Natural and Artificial Intelligence, has been writing about evolution for years as well. Having read my post “The Return of John Derbyshire,” she points out something I hadn’t quite grasped. Now a lightbulb goes off. 

Inherent, Unavoidable

The thread of racism in Darwinian thinking isn’t a chance thing, a mere byproduct of Charles Darwin’s personal views as a “man of his time.” You think if Darwinism had emerged not in the dark age of the 19th century but in our own woke era, it would be different? No, it wouldn’t. The racism is inherent, unavoidable:
That’s just the difference between Darwinism and any type of creationism OR non-Darwinian approaches to evolution. In a Darwinian scheme, someone must be the official subhuman.
I have been trying to get that across for years. It’s why Darwinism can never get away from racism. Racism is implicit in the Darwinian belief system about how things happen.  Even if one’s creationism amounts to no more than the idea that humans have an immortal soul, it makes all humans equal for all practical purposes. Take that away and believe instead that humans are animals that slowly evolved from less-than-human creatures and a variety of things happen, none of them conducive to non-racism.
True, O’Leary has been saying this for a long while. As she wrote over at Uncommon Descent last year:
The problem is not merely that Darwin, a man of his age, was a racist. The problem is that his bias resulted in his and others distorting the fossil record to suit a racist worldview.
Those issues with the fossil record are the theme of most of paleontologist Günter Bechly’s recent writing at Evolution News. She goes on:
To “get past” the fact that Darwin was a racist, we must be willing to undo science that begins by assuming that non-European features are sub-human. But the “hierarchy of man” is rooted in the fundamental assumptions of the “Descent of Man,” the idea that Darwin popularized. Rooting it out would call so many things into question as to produce a crisis. What will we be left with?
Indeed. But then an even bigger problem looms: In any Darwinian scheme, someone must be the subhuman. If not the current lot (formerly, the “savages,” currently the Neanderthals and/or Homo erectus), who will it be?
If they aren’t found, the Darwinist is looking down the maw of some sort of creationism. It need not be theistic creationism. But it does mean that a momentous event happened with inexplicable swiftness, like the Big Bang or the origin of language, findings naturalists do not like precisely because of their creationist implications.
Surely these are the true reasons Darwinists simply can’t confront the race issue and get past it, and so they resort to long-winded special pleading.

A Minimal Concept

The word “creationism” is used unfairly as a cudgel against proponents of intelligent design. But fine, let’s entertain it for a moment, if only to designate a minimal concept like the philosophical one that says humans, while sharing biological common descent with other creatures, are uniquely endowed with souls bearing some sort of exceptional quality, however you characterize that. It could be a divine image, but need not be. The consistent materialist must deny all this.
The idea of racial equality, perfectly natural to a design perspective, can be achieved by the Darwinist only by continually and ruthlessly suppressing a built-in tendency. It requires bad faith: fooling himself about his own way of thinking. Like an irremediable birth defect, it’s never going to go away.

“Tell It on the Mountain”

Or it’s like driving a car with misaligned wheels that always pulls you in one direction and cannot ever drive straight without your exerting continuous effort toward correction. If you don’t fight it all the time, you’ll go off the road and crash into a nasty, muddy ditch, as folks like John Derbyshire have done. John and other “Race Realists” humiliate their fellow evolutionists by dancing in the muck.
The alternative is to get those wheels nicely aligned. Of course it’s possible to find people who believe in creation and who are also racists. You can find bad apples in any community of thinkers. But the key point is that the two ideas are permanently at odds with each other. Whereas Darwinism and racism are a match made in… Well, they’re conjoined twins, let’s put it that way.
“Klinghoffer,” Denyse advises, “tell Derbyshire he needs a louder mike. He should go tell it on the mountain. Worldwide.” Okay, done.

 

Saturday 4 May 2019

Insect navigator vs. Darwin.

Monarch Butterfly's Sun Compass Investigated in Flight Simulator
Evolution News & Views

The amazing story of how Monarch butterflies navigate from Canada to Mexico is beautifully recounted in Metamorphosis: The Beauty and Design of Butterflies. "Here's a butterfly species that does something truly spectacular," Thomas Emmel says in the film. Flying over two thousand miles without an experienced leader who has made the trip before, millions of these artistically colored flyers arrive on schedule in the same trees their grandparents or great-grandparents had wintered in the previous year.

Somehow, with a brain the size of a pinhead and a body weighing less than an ounce, the butterfly contains hardware and software for long-range precision navigation, efficient energy utilization and genetic alterations that enable the "Methuselah generation" to survive ten times as long as their parents. Emmel comments that this phenomenon will likely occupy biologists' attention for centuries to come. Researchers at the University of Washington announced this month that they have "cracked the secrets" of one small aspect of this wonder: the monarch's internal time-compensated sun compass.

We saw from the film that the angle of the sun and its traverse through the daylight hours play a role in the choice of flight direction. Using sunlight to navigate requires processing dynamic inputs:

"Their compass integrates two pieces of information -- the time of day and the sun's position on the horizon -- to find the southerly direction," said Eli Shlizerman, a University of Washington assistant professor. [Emphasis added.]
But how is this information processed in the butterfly's tiny brain? The compass needs to be integrated with the clock.

Monarchs use their large, complex eyes to monitor the sun's position in the sky. But the sun's position is not sufficient to determine direction. Each butterfly must also combine that information with the time of day to know where to go. Fortunately, like most animals including humans, monarchs possess an internal clock based on the rhythmic expression of key genes. This clock maintains a daily pattern of physiology and behavior. In the monarch butterfly, the clock is centered in the antennae, and its information travels via neurons to the brain.
We saw electron micrographs of the antennae in the film. Covered with scales and tiny hairs, these organs are the sites for odor detection, enabling the monarchs to find mates and the females to find their host plants from miles away. Now we learn they are clocks, too:

In their model, two neural mechanisms -- one inhibitory and one excitatory -- controlled signals from clock genes in the antennae. Their model had a similar system in place to discern the sun's position based on signals from the eyes. The balance between these control mechanisms would help the monarch brain decipher which direction was southwest.
Using models and live butterflies in a flight simulator, the researchers determined how they get back on track if blown off course. There's a "separation point" that determines whether the individual will turn left or right. That point changes throughout the day. The insect's onboard computer won't let it cross the separation point, even if it requires a longer path to get back on course. The scientists also shed light on how the butterflies readjust their compasses for the return trip.

Their model also suggests a simple explanation why monarch butterflies are able to reverse course in the spring and head northeast back to the United States and Canada. The four neural mechanisms that transmit information about the clock and the sun's position would simply need to reverse direction.
"And when that happens, their compass points northeast instead of southwest," said Shlizerman. "It's a simple, robust system to explain how these butterflies -- generation after generation -- make this remarkable migration."

"Simple," "simply" -- while appreciating the insights our neighbors at UW have gained, such talk is premature. There's a lot more going on. For one thing, not all individual butterflies head southwest and return northeast. The flight paths from Canada begin on different bearings and converge into a narrow flight path in southern Texas. Also, as Metamorphosis shows, a few individuals take a direct route across the Gulf of Mexico. Then, they all switch direction at a certain point, heading west through a pass toward the Trans-Volcanic Range. Then, within those mountains, they converge on specific trees in 12 known overwintering spots.

Time of day and sun position are necessary, but not sufficient, to explain these precise behaviors. If the researchers feel they have "cracked the secret" of maintaining a particular bearing, they have left all these other secrets unexplained. Their model only shows the bare outlines of a sophisticated, integrated system. The paper in Cell Reports says:

A long-standing, fundamental question about monarch migration has been how the circadian clock interacts with the changing position of the sun to form a time-compensated sun compass that directs flight. Here, we propose a neuronal model for both encoding of the sun's azimuthal position, and molecular timekeeping signals, and how they can be compared to form a time-compensated sun compass. Our results propose a simple neural mechanism capable of producing a robust time-compensated sun compass navigation system through which monarch butterflies could maintain a constant heading during their migratory flight.
But as we just said, the butterflies don't keep a constant heading. They make turns. And what tells individuals in Minnesota to head south and individuals in Pennsylvania to head southwest? Do their offspring for the three generations on the return trip keep the target information embedded in their memories? A compass is no good without a map. Both are no good without a brain to read them. Furthermore, as we reported about Bogong moths, some lepidopterans can reach their long-range destinations in the dark!

The researchers identified four genes that keep time by oscillating their expression levels. They identified how the compound eye determines azimuth and bearing even when the sun's position is changing throughout the day. A butterfly, however, needs much more. It needs controls for altitude, pitch, and yaw. It needs a map of where it has to go. And it needs to monitor resources to ensure it has the energy to get there.
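The core idea of time compensation described above can be caricatured in a few lines of Python. This is only an illustrative sketch, not the paper's neural model: the linear sun-azimuth approximation, the 225-degree goal bearing, and both function names are assumptions made for clarity.

```python
def sun_azimuth(hour):
    """Crude stand-in for the sun's compass azimuth: due east (90 deg)
    at 6:00, south (180 deg) at noon, west (270 deg) at 18:00,
    sweeping a steady 15 degrees per hour."""
    return (90.0 + 15.0 * (hour - 6.0)) % 360.0

def compensated_flight_angle(hour, goal_bearing=225.0):
    """Angle to hold relative to the sun so the ground track stays on
    the goal bearing (southwest = 225 deg) at any time of day."""
    return (goal_bearing - sun_azimuth(hour)) % 360.0
```

At noon the sun sits due south, so a southwest-bound flyer holds 45 degrees to the sun's left; at 6 a.m. it must hold 135 degrees. The point of the sketch is simply that a heading cannot be computed from the sun alone; the clock term is indispensable, which is exactly the integration the researchers modeled.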

Nevertheless, we're glad for any and all research that sheds light on the details of monarch butterfly navigation, especially when it can lead to intelligently designed applications that benefit everyone:

The model that we have built closes the loop between the time and azimuth stimuli and orientation control. As such, it provides an important framework for future studies of the monarch sun compass. Our framework can be used to design electrophysiological and flight recordings experiments to compare responses in monarchs' neurons and model units and to determine the detailed architecture of neural circuits that implement the integration mechanism postulated by our model. It also provides a simple mechanism for navigation that can be used in devices that do not have the benefit of a global positioning system.

A complex, functional system that, when partially understood, can be "used to design" something else must have been designed itself.

Wednesday 24 April 2019

How OOL science can earn some respectability.

Announcing New Sewell Prize in Biological Origins Studies!
David Klinghoffer | @d_klinghoffer

Over at Uncommon Descent, mathematician Granville Sewell announces a new prize he will be administering. It sets a fascinating challenge but offers an enviable reward:

I plan to award a prize to anyone who can invent a non-trivial 3D machine which can replicate itself. The machine must be able to make copies of itself without human intervention, except possibly to supply the raw materials. Basically a 3D printer which can print a copy of itself which retains the ability to print a copy of itself, which… A page which can be photocopied does not count, because it is the photocopier which actually makes the copy, unless the photocopy machine also makes a copy of itself; a computer program which duplicates itself does not count unless the computer it runs on makes a copy of itself also.

The reward isn’t cash but something even better: 

The prize: the right to speculate about how life originated. Leave your artificial species alone and see how many generations it lasts before going extinct. If it makes accurate copies of itself for 100+ generations, then you also get to speculate about how genetic duplication errors might accumulate into major evolutionary advances.

Now go to it, you proponents of unguided abiogenesis and evolution through all sorts of brute unintelligent, and only unintelligent, natural forces. The first ever Sewell Prize in Biological Origins Studies awaits!

Blinded by fuzzy math?

Behe on Darwinism’s “Socially Inherited Dependence on Classical Yet Irrelevant Math”
David Klinghoffer | @d_klinghoffer

“Natural selection, irreducible complexity, and random mutation all reinforce each other to make Darwinian evolution very limited,” explains biochemist Michael Behe on a new ID the Future episode. That means Behe must tackle a tough question: if the limits of Darwinian evolution are so clear, as he shows in his new book Darwin Devolves, why do gifted scientists like Richard Lenski and Joseph Thornton persist in hanging on to the failed theory, against the evidence of their own experiments? 


Talking with host Andrew McDiarmid, Behe traces the errant thinking to an outdated mathematical picture taken from Ronald Fisher (1890-1962) and his 1930 book, The Genetical Theory of Natural Selection. This highly influential work was written before scientists knew what genes actually are. It’s not surprising that Fisher missed the role of broken genes in containing the creative potential of unguided evolution.

Through “a socially inherited dependence on classical yet irrelevant math,” present-day researchers are saddled with Fisher’s own “hopeful ignorance.” Download the podcast or listen to it here.

A fatwa from one of Darwinism's ayatollahs

Do Evolutionary Principles Derive from Observation, or Are They Imposed on It?
Evolution News & Views February 25, 2016 11:25 AM

Geneticist Dan Graur of the University of Houston is implacably opposed to intelligent design -- and brutally honest about his opposition, which makes for bracing clarity in what he writes. Recently he tweeted the following 12 principles of evolution, which he drafted.

You have to love Graur's principles, because they capture orthodox neo-Darwinism in its purest form:

All of Evolutionary Biology in 12 Paragraphs, 237 Words, and 1,318 Characters

Evolutionary biology is ruled by handful of logical principles, each of which has repeatedly withstood rigorous empirical and observational testing.


The rules of evolutionary biology apply to all levels of resolution, be it DNA or morphology.


New methods merely allow more rapid collection or better analysis of data; they do not affect the evolutionary principles.


The only mandatory attribute of the evolutionary processes is a change in allele frequencies.


All novelty in evolution starts as a single mutation arising in a single individual at a single time point.


Mutations create equivalence more often than improvement, and functionlessness more often than functionality.


The fate of mutations that do not affect fitness is determined by random genetic drift; that of mutations that do affect fitness by the combination of selection and random genetic drift.


Evolution occurs at the population level; individuals do not evolve. An individual can only make an evolutionary contribution by producing offspring or dying childless.


The efficacy of selection depends on the effective population size, an historical construct that is different from the census population size, which is a snapshot of the present.


Evolution cannot create something out of nothing; there is no true novelty in evolution.


Evolution does not give rise to "intelligently designed" perfection. From an engineering point of view, most products of evolution work in a manner that is suboptimal.


Homo sapiens does not occupy a privileged position in the grand evolutionary scheme.


It's fun to play around with these. Take principle 10 seriously, for instance, and taxonomically restricted genes (so-called "orphans") represent a significant puzzle to neo-Darwinian evolution. The deepest question, of course, is this: Do these principles -- in particular, numbers 4 through 12 -- derive from observations, or are they imposed on observations?
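Principles 4 and 7 -- evolution as change in allele frequencies, with the fate of neutral mutations set by drift -- are easy to make concrete. Here is a minimal Wright-Fisher drift sketch; the population size, starting frequency, and function name are all illustrative assumptions, not anything from Graur's text.

```python
import random

def wright_fisher(p0=0.5, pop_size=100, generations=50, seed=1):
    """Track one allele's frequency under pure genetic drift (no
    selection): each generation resamples 2N gene copies binomially
    from the previous generation's frequency."""
    random.seed(seed)
    p = p0
    history = [p]
    for _ in range(generations):
        copies = sum(random.random() < p for _ in range(2 * pop_size))
        p = copies / (2 * pop_size)
        history.append(p)
    return history
```

Run it a few times with different seeds and the frequency wanders until it fixes at 0 or 1 -- change in allele frequencies with no improvement anywhere, which is Graur's point about the "only mandatory attribute" of the process.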

Historical parallel:

And we recall that Galileo never made use of Kepler's ellipses, but remained to the end a true follower of Copernicus who had said "the mind shudders" at the supposition of noncircular, nonuniform celestial motion.

(Gerald Holton, Thematic Origins of Scientific Thought, Harvard University Press, p. 64)

Indeed, one could take this catechism to London in November, to the Royal Society meeting on alternatives to neo-Darwinism, as a template for where heresy may be expected to arise.

The end of innocence?

"People Are Starting to See Scientists the Way They Really Are"


On the echo chamber called settled science.

How Consensus Can Blind Science
Evolution News & Views 

The ruins of Mayan civilization impress anyone who visits. For Harvard astronomer Avi Loeb, they spoke of the philosophy of science, then and now. Recounting his experience for Nature, he describes the questions that came to his mind:

This summer, I visited the Mayan city of Chichén Itzá in the Yucatán Peninsula, Mexico. It has an ancient observatory where priest-astronomers made detailed astronomical observations around AD 600-1200. The ruins -- stepped pyramids, temples, columned arcades and other stone structures -- reveal that astronomy was at the heart of this sophisticated society.
The Mayans accurately tracked changes in the positions and relative brightness of the Sun, Moon, planets and stars. They documented their astronomical data in folding books called codices, with many more quantitative details than other civilizations at the time. The priest-astronomers used observations and advanced mathematical calculations to predict eclipses, and devised a 365-day solar calendar that was off by just one month every 100 years.

So why, I wondered, didn't the Mayans go further and infer aspects of our modern understanding of astronomy? They determined the orbital periods of Venus, Mars and Mercury around the Sun, but Earth was at the centre of their Universe.
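Loeb's figure for the Mayan calendar checks out arithmetically: a fixed 365-day year falls behind the tropical year (about 365.2422 days) by roughly a month per century. A quick back-of-the-envelope check:

```python
tropical_year = 365.2422              # mean tropical year in days
calendar_year = 365.0                 # fixed 365-day solar calendar
drift_per_year = tropical_year - calendar_year      # ~0.24 days per year
drift_per_century = 100 * drift_per_year            # ~24 days, about a month
```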

Dr. Loeb knows he is a leading figure in today's consensus views in astronomy and cosmology. But unlike some others in his profession, he did not merely dismiss the Mayan achievements as works of a backward people lost in religious superstition. Turning his contemplations inward, he asked, "Have we learned our lesson, or is today's science similarly trapped by cultural and societal forces?"

Loeb pondered how the Mayans' detailed calculations were used to support a civilization that engaged in human sacrifice and planned warfare by the stars. "I came to appreciate how limiting prevailing world views can be," he says. In their day, Mayan astronomers were held in high social esteem. So are today's astronomers. Loeb thought about how "Cosmologists today collect vast amounts of exquisite data in surveys of large parts of the sky, costing billions of dollars." But good data are not enough, he realized.

What might have helped the Mayans break out of the box of their prevailing world view? He hit upon a solution that should be of great interest to the intelligent design community.

The consequences of a closed scientific culture are wasted resources and misguided 'progress' -- witness the dead end that was Soviet evolutionary biology. To truly move forward, free thought must be encouraged outside the mainstream. Multiple interpretations of existing data and alternative motivations for collecting new data must be supported.
We hasten to clarify that so far as we know, Dr. Loeb is not an advocate of intelligent design, even though he has proposed a design inference for SETI. His advice, though, harks back to Darwin's own admonition: "A fair result can be obtained only by fully stating and balancing the facts and arguments on both sides of each question." The question in Darwin's day was design v. naturalism, as it is today. When it comes to origins, there aren't any other options. Either the universe is designed, or it is not. Would it not be limiting to the prevailing world view to assert that naturalism is the only side of the question?

Loeb gives a brief catalog of massive projects being funded by NASA and ESA. But then he gives a Kuhnian view of these efforts. Today's astronomers and cosmologists work within a paradigm as members of a guild who cannot see outside their project. Anomalies do not alarm them, as they should. Instead, they focus their attention on puzzle-solving to establish the paradigm -- not to overthrow it.

Such projects have a narrow aim -- pinning down the parameters of one theoretical model. The model comprises an expanding Universe composed of dark matter, dark energy and normal matter (from which stars, planets and people are made), with initial conditions dictated by an early phase of rapid expansion called cosmic inflation. The data are reduced to a few numbers. Surprises in the rest are tossed away.
I noticed this bias recently while assessing a PhD thesis. The student was asked to test whether a data set from a large cosmological survey was in line with the standard cosmological model. But when a discrepancy was found, the student's goal shifted to explaining why the data set was incomplete. In such a culture, the current model can never be ruled out, even though everyone knows that its major constituents (dark matter, dark energy and inflation) are not understood at a fundamental level.

He's absolutely right; in recent news reports, dark matter remains as mysterious as ever. Astronomers have leapfrogged from WIMPs to MACHOs to axions, finding no evidence for any of them, despite expensive searches with super-sensitive detectors. Perhaps a candidate particle will turn up some day, but without empirical evidence, have physicists improved on Mayan divination? Dark energy is even more mysterious. Nobody has any idea what that is. And there's more darkness:

How each culture views the Universe is guided by its beliefs in, for example, mathematical beauty or the structure of reality. If these ideas are deeply rooted, people tend to interpret all data as supportive of them -- adding parameters or performing mathematical gymnastics to force the fit. Recall how the belief that the Sun moves around Earth led to the mathematically beautiful (and incorrect) theory of epicycles advocated by the ancient Greek philosopher Ptolemy.
Similarly, modern cosmology is augmented by unsubstantiated, mathematically sophisticated ideas -- of the multiverse, anthropic reasoning and string theory. The multiverse idea postulates the existence of numerous other regions of space-time, to which we have no access and in which the cosmological parameters have different values.

We can become "blinded by beauty" that keeps us attracted to our own beliefs. Loeb mentions some paradoxes in cosmology that hint the consensus is on the wrong track. Modern cosmology is not knowledge, he says; it is organized ignorance! Only openness to other approaches can help reveal the extent of it.

Cultivating other approaches avoids stalling progress by investing only in chasing what might turn out to be 'epicycles'. After all, the standard model of cosmology is merely a precise account of our ignorance: we do not understand the nature of inflation, dark matter or dark energy. The model has difficulties accounting for the luminous gas and stars that we can see in galaxies, while leaving invisible what we can easily calculate (dark matter and dark energy). This state of affairs is clearly unsatisfactory.
The only way to work out whether we are on the wrong path is to encourage competing interpretations of the known data.

ID proponents might take advantage of Loeb's appeal in a way Paul Nelson relates in the new film Origin:

Often, when I'm reading the origin-of-life literature, considering some proposal for the formation, let's say, of RNA or proteins -- and even the author admits it's chemically implausible -- and I ask myself, How did he get himself into this jam?...
And at that impasse I want to say to the author, Look you know it's implausible, I know it's implausible, the reason that you ended up here standing at the edge of this cliff staring at implausibility is because way back down the road you decided that only a materialist explanation would work.

Dr. Timothy Standish adds:

How many different explanations that fail within the materialistic box do we have to have before we decide, maybe we need to move outside of that box. Maybe we need another kind of explanation.
When it comes to cosmology, Dr. Loeb is painfully aware of implausibilities in the naturalistic box: dark matter, dark energy, inflation, the multiverse, and the Anthropic Principle. But he is not just standing at the cliff, staring at implausibility; he is welcoming new thinking outside the box. It's an opportunity for a meaningful discussion of alternatives. And it was published in Nature, the world's leading science journal.

A healthy dialogue between different points of view should be fostered through multidisciplinary conferences that discuss conceptual issues, not just experimental results and phenomenology. A diversity of views fosters healthy progress and prevents stagnation. In September, I had the privilege of founding an interdisciplinary centre, the Black Hole Initiative at Harvard University in Cambridge, Massachusetts, which brings together astronomers, physicists, mathematicians and philosophers. Our experience is that a mix of scholars with different vocabularies and comfort zones can cultivate innovation and research outside the box. Already the centre has prompted exciting insights on the reality of naked singularities in space-time, the prospects for imaging black-hole silhouettes and the information paradox.
The "information paradox" refers to the conservation of information in the case of black holes. Think about what ID has to offer the admittedly stagnant field of consensus cosmology:

Theory of information: its source and conservation

A cause to explain the fine-tuning of the universe (e.g., see New Scientist) without the "cop-out" of the Anthropic Principle (New Scientist)

Mathematics, as seen in Origin's probability calculations for the origin of life, the Universal Probability Bound, and the "No Free Lunch" theorems

Philosophy of science unlimited by naturalism

A different vocabulary and comfort zone

Research outside the box

Loeb concludes, "Such simple, off-the-shelf remedies [i.e., multidisciplinary conferences] could help us to avoid the scientific fate of the otherwise admirable Mayan civilization." Would he extend his hand far enough to consider design? Perhaps not. But while the rise and fall of Mayan civilization is on his mind, leading him to ponder modern cosmology's potential for worldview blindness, the time seems right to engage the discussion.