Saturday, 20 May 2017
On our latter-day Frankensteins and the end of science.
Swarm" Science: Why the Myth of Artificial Intelligence Threatens Scientific Discovery
Erik J. Larson
In the last year, two major well-funded efforts have launched in Europe and in the U.S. aimed at understanding the human brain using powerful and novel computational methods: advanced supercomputing platforms, analyzing peta- and even exabyte datasets, using machine learning methods like convolutional neural networks (CNNs), or "Deep Learning."
At the Swiss Federal Institute of Technology in Lausanne (EPFL), for instance, the Human Brain Project is now underway, a ten-year effort funded by the European Commission to construct a complete computer simulation of the human brain. In the U.S., the Obama Administration has provided an initial $100 million in funding for the newly launched Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, with funding projected to reach $3 billion over the next ten years. Both projects are billed as major leaps forward in our quest to gain a deeper understanding of the brain -- one of the last frontiers of scientific discovery.
Predictably, given today's intellectual climate, both projects are premised on major confusions and fictions about the role of science and the powers of technology.
The myth of evolving Artificial Intelligence, for one, lies at the center of these confusions. While the U.S. BRAIN Initiative is committed more to the development of measurement technologies aimed at mapping the so-called human connectome -- the wiring diagram of the brain viewed as an intricate network of neurons and neuron circuits -- the Human Brain Project more explicitly seeks to engineer an actual, working simulation of a human brain.
The AI myth drives the HBP vision explicitly, then, even as ideas about Artificial Intelligence and the powers of data-driven methods (aka "Big Data") undergird both projects. The issues raised today in neuroscience are large, significant, and profoundly troubling for science. In what follows, I'll discuss Artificial Intelligence and its role in science today, focusing on how it plays out so unfortunately in neuroscience, and in particular in the high-visibility Human Brain Project in Switzerland.
AI and Science
AI is the idea that computers are becoming intelligent in the same sense as humans, and eventually to an even greater degree. The idea is typically cast by AI enthusiasts and technologists as forward-thinking and visionary, but in fact it has profoundly negative effects on certain very central and important features of our culture and intellectual climate. Its eventual effect is to distract us from using our own minds.
The connection here is obvious, once you see it. If we believe that the burden of human thinking (and here I mean, particularly, explaining the world around us) will be lessened because machines are rapidly gaining intelligence, and if that belief turns out to be fictitious, the consequence for science can only be to diminish and ultimately to imperil it.
At the very least, we should expect scientific discovery not to accelerate, but to remain in a confused and stagnant state with this set of ideas. These ideas dominate today.
Look at the history of science. Scientists have grand visions and believe they can explain the world by contact of the rational mind with nature. One thinks of Einstein, but many others as well: Copernicus, Galileo, Newton, Maxwell, Hamilton, Heisenberg, even Watson and Crick.
Copernicus, for instance, became convinced that the entire Ptolemaic model of the solar system stemmed from false theory. His heliocentric model is a case study in the triumph of the human mind not to analyze data but effectively to ignore it -- seeking a more fundamental explanation of observation in a rational vision that is not data-driven but prior to and more fundamental than what we collect and view (the "data"). Were computers around back then, one feels that Copernicus would have ignored their results too, so long as they were directed at analyzing geocentric models. Scientific insight here is key, yesterday and today.
Yet the current worldview is committed, incessantly and obsessively, to reducing scientific insight to "swarms" of scientists working on problems, each making small contributions to a framework that is already in place. The Human Brain Project here is paradigmatic: the "swarm" language comes directly from a key HBP contributor, Sean Hill (in the recent compilation The Future of the Brain, edited by Gary Marcus, whom I like).
The swarm metaphor evokes thoughts of insects buzzing around, fulfilling pre-ordained roles. So if we're convinced that in a Human-Technology System the "technology" is actually becoming humanly intelligent (the AI myth), the set of social and cultural beliefs begins to change to accommodate a technology-centered worldview. This, however, provides very little impetus for discovery.
To the extent that individual minds aren't central to the technology-driven model of science, then "progress" based on "swarm science" further reinforces the belief that computers are increasingly responsible for advances. It's a self-fulfilling vision; the only problem is that fundamental insights, not being the focus anyway, are also the casualties of this view. If we're living in a geocentric universe with respect to, say, neuroscience still, the model of "swarm" science and data-driven analysis from AI algorithms isn't going to correct us. That's up to us: in the history of science, today, and in our future.
An example. Neuroscientists are collecting massive datasets from neural imaging technologies (not itself a bad thing), believing that machine-learning algorithms will find interesting patterns in the data. When the problem is well defined, this makes sense.
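To make concrete what "well defined" means here, consider a minimal sketch (my own illustration, not code from either brain project): synthetic data standing in for imaging-derived features, a question fixed in advance, and a measurable notion of success. The dataset, feature count, and choice of classifier are all assumptions made purely for illustration.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for imaging-derived features: 1,000 "recordings",
# 50 features each, only a handful of which actually carry signal.
X, y = make_classification(n_samples=1000, n_features=50,
                           n_informative=5, random_state=0)

# The question is fixed in advance -- does a recording belong to condition
# A or condition B? -- and success is measurable as held-out accuracy.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

# The pattern-finding is real, but the labels, the features, and the very
# notion of "success" were supplied by a human experimenter beforehand.

The pattern-finding is genuine, in other words, but the algorithm answers a question it did not ask.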
But reading the literature, it's clear that the more starry-eyed among the neuroscientists (like Human Brain Project director Henry Markram) also think that such an approach will obviate the need for individual theory in favor of a model where explanation "emerges" from a deluge of data.
This is not a good idea. For one thing, science doesn't work that way. The "swarm-and-emerge" model of science would seem ridiculous were it not for the belief that such massive quantities of data run on such powerful computing resources ("massive" and "powerful" are part of the emotional component of this worldview) could somehow replace traditional means of discovery, where scientists propose hypotheses and design specific experiments to generate particular datasets to test those hypotheses.
Now, computation is supposed to replace all that human-centered scientific exploration -- Markram himself has said publicly that the thousands of individual experiments are too much for humans to understand. It may be true that the volume of research is daunting, but the antidote can hardly be to force thousands of labs to input data into a set of APIs that presuppose a certain, particular theory of the brain! (This is essentially what the Human Brain Project does.) We don't have the necessary insights, yet, in the first place.
Even more pernicious, the belief that technology is "evolving" and getting closer and closer to human intelligence gives people less and less impetus to fight for traditional scientific practice, centered on discovery. If human thought is not the focus anymore, why empower all those individual thinkers? Let them "swarm," instead, around a problem that has been pre-defined.
This too is an example of how the AI myth encourages a kind of non-egalitarian view of things, where a few people end up telling everyone else what to do, even as the model is supposed to be communitarian in spirit. That takes us a little too far off topic for now, but it is a fascinating case study in how false narratives are self-serving in subtle ways.
Back to science. In fact, the single best worldview for scientific discovery is simple: human minds explain data with theory. Only once we have this belief in place can we, and should we, add: and our technology can help us. Computation is a tool -- a very powerful one -- but since it isn't becoming intelligent in the sense of providing theory for us, we can't jettison our model of science and begin downplaying or disregarding the theoretical insights that scientists (with minds) provide.
This is a terrible idea. It's just terrible. It's no wonder that the major scientific successes of the last decade have been largely engineering-based, like the Human Genome Project. No one has the patience, or even the faith, to fund smaller-scale and more discovery-based efforts.
The idea, once again, is that the computational resources will somehow replace traditional scientific practice, or "revolutionize it" -- but as I've been at pains to argue, computation isn't "smart" in the way people are, and so the entire AI Myth is not positive, or even neutral, but positively threatening to real progress.
The End of Theory? Maybe So
Hence when Chris Anderson wrote in 2008 that Big Data and supercomputing (and machine learning, i.e., induction) meant the "End of Theory," he echoed the popular Silicon Valley worldview that machines are evolving a human -- and eventually a superhuman -- intelligence, and he simultaneously imperiled scientific discovery. Why? Because (a) machines aren't gaining abductive inference powers, and so aren't getting smart in the relevant manner to underwrite "end of theory" arguments, and (b) ignoring the need for scientists to use their minds to understand and explain "data" essentially guts the central driving force of scientific change.
To put this, yet again, on more practical footing: over five hundred neuroscientists petitioned the EU last year because a huge portion of funding for fundamental neuroscience research (over a billion euros) went to the Human Brain Project, which is an engineering effort that presupposes that fundamental pieces of theory about the brain are already in place. The only way a reasonable person could believe that is if he were convinced that the Big Data/AI model would somehow yield those theoretical fruits along the way. When pressed, however, the silence as to how exactly that will happen is deafening.
The answer Markram and others want to provide -- if only sci-fi arguments worked on EU officials or practicing neuroscientists -- is that the computers will keep getting "smarter." And so that myth is really at the root of a lot of current confusion. Make no mistake, the dream of AI is one thing, but the belief that AI is around the corner and inevitable is just a fiction, and potentially a harmful one at that.
To chance and necessity be the glory?
Moths Defy the Possible
Evolution News & Views
How do you make choices in a data-poor environment? Imagine being in a dark room in total silence. Every few seconds, a tiny flash of light appears. You might keep your eyes open as long as possible to avoid missing any of them. You might watch the flashes over time to see if there's a pattern. If you see a pattern, you might deduce it will lead to further information.
The ability to navigate this way in a dim world is called a summation strategy. "This slowing visual response is consistent with temporal summation, a visual strategy whereby the visual integration time (or 'shutter time') is lengthened to increase visual reliability in dim light," Eric Warrant explains in Science. He's discussing how hawkmoths perform "Visual tracking in the dead of night," and he's clearly impressed by how amazingly well insects "defy the possible" as they move through the world:
Nocturnal insects live in a dim world. They have brains smaller than a grain of rice, and eyes that are even smaller. Yet, they have remarkable visual abilities, many of which seem to defy what is physically possible. On page 1245 of this issue, Sponberg et al. reveal how one species, the hawkmoth Manduca sexta, is able to accurately track wind-tossed flowers in near darkness and remain stationary while hovering and feeding. [Emphasis added.]
The hawkmoth has some peers on the Olympic award platform:
Examples of remarkable visual abilities include the nocturnal Central American sweat bee Megalopta genalis, which can use learned visual landmarks to navigate from its nest -- an inconspicuous hollowed-out twig hanging in the tangled undergrowth -- through a dark and complex rainforest to a distant source of nocturnal flowers, and then return. The nocturnal Australian bull ant Myrmecia pyriformis manages similar navigational feats on foot. Nocturnal South African dung beetles can use the dim celestial pattern of polarized light around the moon or the bright band of light in the Milky Way as a visual compass to trace out a beeline when rolling dung balls. Some nocturnal insects, like the elephant hawkmoth Deilephila elpenor, even have trichromatic color vision.
These insects pack a lot of computing power in brains the size of a grain of rice. How do they do it? Part of the answer lies in the fine-tuning between object and sensor:
It turns out that even though the hawkmoths must compromise tracking accuracy to meet the demands of visual motion detection in dim light, the tracking error remains small exactly over the range of frequencies with which wind-tossed flowers move in the wild. The results reveal a remarkable match between the sensorimotor performance of an animal and the dynamics of the sensory stimulus that it most needs to detect.
A tiny brain imposes real-world constraints on processing speed. The hawkmoth, so equipped, faces limits on sensorimotor performance: how sensitive its eyes are in dim light, how quickly it can perceive motion in the flower, and how fast it can move its muscles to stay in sync. The moth inserts its proboscis into the flower, and if a breeze moves the flower about, the moth has to be able to keep up with it to get its food. To meet the challenge, its brain software includes the "remarkable" ability to perform data summation and path integration fast enough to move with the flower while it feeds.
In their experiments, Sponberg et al. observed hawkmoths in a specially designed chamber. They were able to control light levels and move artificial flowers containing a sugar solution at different speeds. "During experiments, this flower was attached to a motorized arm that moved the flower from side to side in a complex trajectory," Warrant says.
The component movement frequencies of this trajectory varied over two orders of magnitude and encompassed the narrower range of frequencies typical of wind-tossed flowers. A hovering moth fed from the flower by extending its proboscis into the reservoir, rapidly flying from side to side to maintain feeding by stabilizing the moving flower in the center of its visual field.
The experiment allowed the researchers to cross the line from possible to impossible, showing at what point the moth could not keep up. Dimmer light requires longer integration time, while faster motion requires quicker muscle response. Still, these little flyers "tracked the flower remarkably well" by using the temporal summation strategy.
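For readers who want a feel for that trade-off, here is a rough numerical sketch of temporal summation as a simple moving average -- my own toy model, not the Sponberg et al. analysis. The sampling rate, shutter times, and noise level are all assumed for illustration; the point is only that a longer integration window suppresses noise while wiping out fast motion.

import numpy as np

fs = 1000                          # samples per second (assumed)
t = np.arange(0, 10, 1 / fs)       # 10 s of simulated viewing

def summed_response(signal_hz, shutter_s, noise_sd=1.0):
    """Motion gain and residual noise after averaging over `shutter_s` seconds."""
    rng = np.random.default_rng(0)
    motion = np.sin(2 * np.pi * signal_hz * t)          # the "flower" swaying
    noisy = motion + rng.normal(0, noise_sd, t.size)    # dim-light noise
    n = max(1, int(shutter_s * fs))
    kernel = np.ones(n) / n                             # the lengthened "shutter"
    smoothed_motion = np.convolve(motion, kernel, mode="same")
    smoothed_noisy = np.convolve(noisy, kernel, mode="same")
    gain = np.std(smoothed_motion) / np.std(motion)     # how much motion survives
    noise_left = np.std(smoothed_noisy - smoothed_motion)
    return gain, noise_left

for hz in (1.0, 5.0):                # slow vs. fast flower motion
    for shutter in (0.02, 0.2):      # short vs. long integration time (s)
        gain, noise = summed_response(hz, shutter)
        print(f"{hz:.0f} Hz motion, {shutter * 1000:.0f} ms shutter: "
              f"motion gain {gain:.2f}, residual noise {noise:.2f}")

# A 200 ms shutter cuts the noise dramatically and still passes ~1 Hz motion,
# but it nearly erases 5 Hz motion -- the same trade-off the moth faces.

The numbers are arbitrary; only the shape of the trade-off matters, and it mirrors the one Warrant describes.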
Hummingbirds feed on moving flowers, too, but usually in broad daylight. To find this ability to track a moving food source in a tinier creature possessing a much smaller brain is truly amazing -- especially considering that it has less light to see by.
This strategy has recently been demonstrated in bumblebees flying in dim light and has been predicted for nocturnal hawkmoths. Although temporal summation sacrifices the perception of faster objects, it strengthens the perception of slower ones, like the slower movement frequencies (below ~2 Hz) of the robotic flower.
... and that just happens to be the maximum speed of the natural flowers in the moth's environment. How did this perfect match arise? Why, natural selection, of course. Here comes the narrative gloss:
By carefully analyzing the movements of several species of flowers tossed by natural winds -- including those favored by hawkmoths -- Sponberg et al. discovered that their movements were confined to frequencies below ~2 Hz. Thus, despite visual limitations in dim light, the flight dynamics and visual summation strategies of hovering hawkmoths have evolved to perfectly match the movement characteristics of flowers, their only food source. The implications of the study go far beyond this particular species. It shows how in small animals like hawkmoths, with limited nervous system capacities and stretched energy budgets, the forces of natural selection have matched sensory and motor processing to the most pressing ecological tasks that animals must perform in order to survive. This is done not by maximizing performance in every possible aspect of behavior, but by stripping away everything but the absolutely necessary and honing what remains to perform tasks as accurately and efficiently as possible.
Do the experimenters agree with this narrative? They actually have little to say about evolution. Near the end of the paper, they speculate a little:
The frequencies with which a moth can maneuver could provide a selective pressure on the biomechanics of flowers to avoid producing floral movements faster than those that the moth can track in low light. The converse interaction -- flower motions selecting on the moth -- could also be important, suggesting a coevolutionary relationship between pollinator and plant that extends beyond color, odor, and spatial features to include motion dynamics.
The evolutionary narrative, though, is unsatisfying. It is only conceptual, not empirical (nobody saw the flower and moth co-evolve). Additionally, flowers could just as well thrive with other pollinators that operate during the daytime. Or the moth could simply adjust its biological clock to feed in better light. The theory "It is, therefore it evolved" could explain anything.
Warrant also mischaracterizes natural selection as a force. Natural selection is more like a bumper than a force; it's the hub in the pinball game, not the flipper that an intelligent agent uses. It's far easier for a moth to drop the ball in the hole (i.e., go extinct) than to decide what capacities it must stretch to match the hubs in its game. The hub, certainly, cares nothing about whether the player wins or not. It's not going to tell the moth, "Pssst ... strip away everything that's not absolutely necessary, and hone what remains, and you might win!" Personifying natural selection in this way does not foster scientific understanding.
Worst of all, the evolutionary explanation presupposes the existence of highly complex traits that are available to be stripped or honed: flight, a brain, muscles -- the works. You can't hone what isn't there.
What we observe is a tightly adapted relationship between flower and moth that reaches to the limits of the possible. Any stripping or honing comes not from the environment, but from internal information encoded in the organism. Intelligent causes know how to code for robustness, so that a program can work in a variety of circumstances. Seeing this kind of software packed into a computer the size of a grain of rice makes the design inference even more compelling.
Illustra Media's documentary Metamorphosis showed in vibrant color the remarkable continent-spanning migration of the Monarch butterfly. Their new documentary, Living Waters (coming out this summer), shows dramatic examples of long-distance migration and targeting in the oceans and rivers of the earth, where the lack of visual cues makes finding the target even more demanding. The film makes powerful arguments against the abilities of natural selection, and for the explanatory fruitfulness of intelligent design.
Why so little evolving across the history of life?
A Good Question from Michael Denton About the Fixity of Animal Body Plans
David Klinghoffer September 9, 2011 6:00 AM
Biochemist Michael Denton (Evolution: A Theory in Crisis, Nature's Destiny: How the Laws of Biology Reveal Purpose in the Universe) was in our offices this week and he casually posed a question that I, for one, had never considered. Hundreds of millions of years ago, the animal body plans became fixed. They stayed as they were and remain so today.
Before that -- I'm putting this my way, so if I get anything wrong blame me -- of course they had been, under Darwinian assumptions, morphing step-by-step, with painful gradualness. Then they just stopped and froze in their tracks.
The class Insecta with its distinctive segmentation, for example, goes back more than 400 million years to the Silurian period. It gives the impression of a creative personality at work in a lab. He hits on a design he likes and sticks with it. It does not keep morphing.
This is exactly the way I am about recipes. I experiment with dinner plans, discover something I like, and then repeat it endlessly with minor variations from there onward.
Why does the designer or the cook like it that way? Well, he just does. There's no reason that can be expressed in traditional Darwinian adaptive terms. There is no adaptive advantage in this fixity of body plans. Why not keep experimenting and morphing as an unguided, purposeless process would be expected to do? But nature doesn't work that way. It finds a good plan and holds on to it fast, for dear life. This suggests purpose, intelligence, thought, design. Or is there something I'm missing?