
Saturday, 25 May 2019

Yet more on mindless intelligence.

More on "Intelligent Design Without a Designer"
Brendan Dixon April 3, 2016 3:22 AM

A reader, Eric, writes to ask about my post "Nature 'Learns' to Create Complexity? Think Again":

Like Brendan, I work on developing computer models and software for scientific applications. His comments about models vs. reality were, of course, spot on. It did occur to me that there is one possible partial response to a point that was made. Brendan writes:

Worse, whatever is "learned" by that organism must be passed along to the children. So, unless the paper's authors subscribe to some form of Lamarckism, which is exceedingly unlikely, without a clear inheritance path all the training value is lost.

A neo-Darwinist could respond with the same kind of switch that the synthesis used to rescue the original version of Darwinism. Darwinism originally had a Lamarckian flavor to it (which is why its proponents opposed and resisted Mendelian genetics). With the neo-Darwinian synthesis, they dropped the Lamarckian aspects, embraced genetics, and proposed that the gains were built up through genetic variations.

In a similar way, someone now could say that the "learning" isn't Lamarckian. Rather, it is the genetic variations to GRNs that succeed in being passed on (in contrast to those that were weeded out). The correspondence with environmental conditions comes in the form of the natural-selection filtering that happens after the fact of the genetic change.

Yet, if that is how it should be understood, the issues raised in the preceding paragraph about training "over time and space" are still relevant. The genetic "learning" would not be limited to the life of any single individual, but it still would be constrained to accumulating within a line of ancestry.

That would make the "learning" generational in nature. Instead of a single copy of a network collecting "millions of data sets," one would have to look along an organism's ancestral line to observe the inherited network receiving a multitude of tweaks down through the successful generations.

At first glance, it might seem that this way of viewing things could make the scheme work for the neo-Darwinist. However, it does come back to the details, such as the limited amount of successful tweaking that can take place in any single generation and the limited number of generations down to the present organism.

Quite simply, there still seems to be a serious problem with accumulating the extent of learning that would be needed.

The other main thought I had was this. You cannot have a second time without having a first time. You cannot learn to use building blocks more effectively apart from first having building blocks that can be used somewhat effectively.

In my software, I know the value of modularization and of being able to recombine components, but that doesn't happen by accident. I wouldn't be able to reuse composable parts if I did not first have parts that allowed for recombination. Flexibility comes by design, and before one could have a flexible, adjustable, "educated" GRN, one would first need a GRN that from the start flexibly allows for "learning." Otherwise, one could not get off the ground.
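To make that concrete, here is a minimal Python sketch of the kind of thing I mean (the Stage interface and the component names are purely illustrative, not from any real system): components recombine freely only because every one of them was designed, up front, to a shared interface.

    from typing import Protocol

    class Stage(Protocol):
        """The shared interface every component is designed to up front."""
        def run(self, data: list[float]) -> list[float]: ...

    class Normalize:
        def run(self, data: list[float]) -> list[float]:
            top = max(data) or 1.0          # avoid dividing by zero
            return [x / top for x in data]

    class Smooth:
        def run(self, data: list[float]) -> list[float]:
            # Two-point moving average (last value repeated to keep length).
            return [(a + b) / 2 for a, b in zip(data, data[1:] + data[-1:])]

    def pipeline(stages: list[Stage], data: list[float]) -> list[float]:
        # Stages can be reordered, swapped, or reused freely -- but only
        # because each was built to the shared interface from the start.
        for stage in stages:
            data = stage.run(data)
        return data

    print(pipeline([Normalize(), Smooth()], [1.0, 4.0, 2.0, 8.0]))

Drop in a component that was not written to that interface and the pipeline has nothing to compose with; the flexibility was a design decision, not an accident.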

Even granting for the sake of argument that GRNs can learn across generations, how do we suppose a learning GRN first came to exist?

Darwinists are forever pushing their present problems aside by building on the idea that whatever they need already existed earlier (without explanation), whether it is functional information or GRNs that can learn.

By the way, if you haven't seen the movie Next, starring Nicolas Cage, I would suggest giving it a look. (It's available for streaming on Netflix.)

It's not a "great" movie in any sense, but it does a very clever job of cinematically portraying what is essentially a visual allegory for the evolutionary picture of finding a winning path filtered out from among many failed attempts. As with evolutionary thought, the attempts are temporally concurrent.

People who have seen this will more easily and more intuitively feel that such a strategy could work (so long as one does not delve into the messy details).

I appreciate Eric's thoughts. I suppose the answer depends on how the paper's authors envision the GRN learning. If it is no different from modification plus selection that preserves those GRNs producing "better" phenotypes, then Eric's point is sound, and he is correct in that case to point out that the entire ancestral chain matters. If instead the GRN learns dynamically (which I don't believe was made clear in the original paper), then inheritance remains a significant problem.

However, if Eric's perspective is correct and a GRN "learns" through descent with modification, the analogy with Neural Networks breaks down. Neural Networks succeed by changing the entire network in response to massive data sets (along with some notion of correctness). For example, imagine two critters: the GRN in one "learns" to adapt part of the network while the other adapts a different part. They cannot share that information (unless you can specify a mechanism by which they can mate and successfully blend what both "learned"), and the adaptation in one is made without respect for the adaptation in the other. NNs succeed because the entire network adapts as a unit; that is, each node learns how to respond to inputs based on how the entire network has fared. You cannot teach one node at a time, or even part of the network at a time -- the entire network responds. Training an NN one node at a time will, by definition, fail.
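To see what "adapts as a unit" means in practice, here is a minimal backpropagation sketch (a toy XOR network of my own invention, with illustrative sizes; it has nothing to do with AlphaGo's actual architecture): each training step scores the whole network's output and then adjusts every weight in every layer together.

    import numpy as np

    rng = np.random.default_rng(0)

    # A toy 2-layer network learning XOR; all sizes are illustrative.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        # The forward pass runs through the whole network...
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # ...the error signal reflects how the entire network fared...
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # ...and a single step updates every weight in every layer at once.
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0)

    print(np.round(out.ravel(), 2))  # typically converges toward [0, 1, 1, 0]

There is no meaningful way to run this loop for one node in isolation; the error signal only exists for the network as a whole.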

And, as Eric agrees, the quantity of data needed to succeed is massive. AlphaGo contained "millions" of connections across 12 (or 13?) layers in two different NNs. AlphaGo was originally fed roughly 30 million moves from 160,000 games, yet that was far from sufficient. It required further training by playing itself (I failed to find the number of moves evaluated during this phase, but I suspect it completed many more than 160,000 games). And then, to allow the search tree to run deep, they distributed its execution across more than 2,000 processors (some of them the specialized, high-performance graphics processors, or GPUs, used in video games). Even then, a small number of "hand-crafted" rules lurked within (see here).

Comparing GRNs to NNs remains uninteresting given what we've learned about training NNs. If anything, what we've learned says that GRNs could not have been self-taught since they just don't have access to the necessary data or a means by which to ensure the entire GRN benefits from the training.

The quantity of data needed to train an NN grows with (though not linearly in) the number of connections within the network. The more connections a network contains, the more training data is necessary.

I do not know how many nodes or connections a GRN contains, but, in a way, it is immaterial: If the number of connections is small, the network cannot "recognize" anything significant. If it is large, then the data requirements will rapidly exceed that available. A small, trainable network will be uninteresting and a large, interesting network will be untrainable. (I like that statement. I believe it captures the challenge succinctly with a bit of rhythm.)
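As a back-of-the-envelope illustration (the network shapes and the ten-examples-per-connection figure below are assumptions for the sketch, a commonly quoted heuristic rather than a law), connection counts, and with them data demands, grow quickly with network size:

    def connection_count(layer_sizes):
        """Fully connected network: sum of products of adjacent layer sizes."""
        return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

    # ~10 examples per connection is a rough heuristic, not a law.
    for sizes in [(10, 5, 2), (100, 50, 10), (1000, 500, 100)]:
        c = connection_count(sizes)
        print(f"{sizes}: {c:,} connections -> on the order of {10 * c:,} examples")

Tenfold growth in the layer widths yields roughly hundredfold growth in connections, which is exactly the squeeze in the small-versus-large dilemma above.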


If someone were to respond by saying, "Ah, but we have thousands of GRNs," then they have a large network (where each node is itself a GRN, something akin to a deep learning problem) that will require massive quantities of data.

Patron saint of the master race?

Why Darwinism Can Never Separate Itself from Racism

David Klinghoffer | @d_klinghoffer

 

It’s great to have insightful colleagues. Denyse O’Leary, with Discovery Institute’s Bradley Center for Natural and Artificial Intelligence, has been writing about evolution for years as well. Having read my post “The Return of John Derbyshire,” she points out something I hadn’t quite grasped. Now a lightbulb goes off. 

Inherent, Unavoidable

The thread of racism in Darwinian thinking isn’t a chance thing, a mere byproduct of Charles Darwin’s personal views as a “man of his time.” You think if Darwinism had emerged not in the dark age of the 19th century but in our own woke era, it would be different? No, it wouldn’t. The racism is inherent, unavoidable:
That’s just the difference between Darwinism and any type of creationism OR non-Darwinian approaches to evolution. In a Darwinian scheme, someone must be the official subhuman.
I have been trying to get that across for years. It’s why Darwinism can never get away from racism. Racism is implicit in the Darwinian belief system about how things happen.  Even if one’s creationism amounts to no more than the idea that humans have an immortal soul, it makes all humans equal for all practical purposes. Take that away and believe instead that humans are animals that slowly evolved from less-than-human creatures and a variety of things happen, none of them conducive to non-racism.
True, O’Leary has been saying this for a long while. As she wrote over at Uncommon Descent last year:
The problem is not merely that Darwin, a man of his age, was a racist. The problem is that his bias resulted in his and others distorting the fossil record to suit a racist worldview.
Those issues with the fossil record are the theme of most of paleontologist Günter Bechly’s recent writing at Evolution News. She goes on:
To “get past” the fact that Darwin was a racist, we must be willing to undo science that begins by assuming that non-European features are sub-human. But the “hierarchy of man” is rooted in the fundamental assumptions of the “Descent of Man,” the idea that Darwin popularized. Rooting it out would call so many things into question as to produce a crisis. What will we be left with?
Indeed. But then an even bigger problem looms: In any Darwinian scheme, someone must be the subhuman. If not the current lot (formerly, the “savages,” currently the Neanderthals and/or Homo erectus), who will it be?
If they aren’t found, the Darwinist is looking down the maw of some sort of creationism. It need not be theistic creationism. But it does mean that a momentous event happened with explicable swiftness, like the Big Bang or the origin of language, findings naturalists do not like precisely because of their creationist implications.
Surely these are the true reasons Darwinists simply can’t confront the race issue and get past it, and so they resort to long-winded special pleading.

A Minimal Concept

The word “creationism” is used unfairly as a cudgel against proponents of intelligent design. But fine, let’s entertain it for a moment, if only to designate a minimal concept like the philosophical one that says humans, while sharing biological common descent with other creatures, are uniquely endowed with souls bearing some sort of exceptional quality, however you characterize that. It could be a divine image, but need not be. The consistent materialist must deny all this.
The idea of racial equality, perfectly natural to a design perspective, can be achieved by the Darwinist only by continually and ruthlessly suppressing a built-in tendency. It requires bad faith: fooling himself about his own way of thinking. Like an irremediable birth defect, it’s never going to go away.

“Tell It on the Mountain”

Or it’s like driving a car with misaligned wheels that always pulls you in one direction and cannot ever drive straight without your exerting continuous effort toward correction. If you don’t fight it all the time, you’ll go off the road and crash into a nasty, muddy ditch, as folks like John Derbyshire have done. John and other “Race Realists” humiliate their fellow evolutionists by dancing in the muck.
The alternative is to get those wheels nicely aligned. Of course it’s possible to find people who believe in creation and who are also racists. You can find bad apples in any community of thinkers. But the key point is that the two ideas are permanently at odds with each other. Whereas Darwinism and racism are a match made in… Well, they’re conjoined twins, let’s put it that way.
“Klinghoffer,” Denyse advises, “tell Derbyshire he needs a louder mike. He should go tell it on the mountain. Worldwide.” Okay, done.