Your Computer Doesn't Like You
Michael Egnor December 15, 2014 2:30 AM
Actually, your computer doesn't dislike you, either. Your computer has no opinion about you at all, because it has no opinions whatsoever.
This is news to Stephen Hawking and Elon Musk, who -- as Erik J. Larson has commented here
-- have recently warned humanity that computers are on the verge of
acquiring minds and could take over the world and end mankind.
Computers, of course, cannot "take over the world and end mankind,"
because computers have no intelligent agency at all. Intelligence, as
denoted in "artificial intelligence," corresponds roughly to what
Aristotle meant by intellect and will. Intellect and will are the
rational capabilities of human beings -- the ability to reason, to
contemplate universals such as good and evil and right and wrong, to
love and hate, to judge and intend and carry out decisions arrived at
through reason. These are capabilities of human beings, and only of
human beings.
Inanimate devices have agency too, but they have unintelligent
agency. Computers can store electrons, move electrons about, light up a
screen, boot up, crash, freeze, and so on. Computers can of course be a
tool by which human beings express their own human intelligent
agency. When a person commits bank fraud via a computer, the person,
not the computer, goes to jail. Computers have no intelligent agency of
their own, and never will, any more than the paperweight on your desk
has intelligent agency.
The only way a computer can hurt you, on its own, is if it falls on your foot.
Computers
are electromechanical devices that we use as tools. They differ only in
complexity from other tools like books, which we use to store and
retrieve representations of knowledge. We make tools, and we use tools,
and they serve our ends. We put representations of our intentions and
knowledge and desires and memories and conceptual insights and errors
into computers, and the software that we have written maps our inputs to
outputs, and then we analyze and ponder the outputs. Nowhere in this
process is there the slightest bit of thinking on the part of
the computer. Computers can't think because things like tools -- even
tools made in Silicon Valley -- can't think. Computers are devices we
use for our own purposes, and like all devices, sometimes the
consequences aren't what we expected. Sometimes the book really changes
the way we think about things, and sometimes we drop the book on our
foot. But the consequences of using tools -- and the consequences can on
occasion be transformative for humanity -- are consequences entirely of
human purposes and mistakes.
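To make that point concrete, here is a minimal sketch -- in Python, purely illustrative, with a hypothetical "loan screening" rule invented for the example -- of what it means for software to map inputs to outputs. Every "decision" the program appears to make is a rule a human wrote in advance.

```python
# A purely illustrative sketch: a hypothetical "loan screening" program.
# Every rule below was chosen by a human author ahead of time;
# the machine only evaluates the mapping from inputs to outputs.

def screen_applicant(income: float, debt: float) -> str:
    """Map an applicant's numbers to a human-chosen label."""
    ratio = debt / income if income > 0 else float("inf")
    if ratio < 0.3:
        return "approve"     # threshold picked by a person
    elif ratio < 0.5:
        return "review"      # so was this one
    else:
        return "decline"     # and this one

# The output reflects the programmer's intentions (and errors),
# not any judgment by the computer itself.
print(screen_applicant(income=50_000, debt=12_000))  # -> "approve"
```

The program neither approves nor declines anything in the sense a loan officer does; it returns whatever label its author's rules dictate, good rules or bad ones.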
We've been through this before. After the invention of writing in
Sumer, parchment didn't acquire a mind and inflict evil on humanity. But
writing did change civilization. After the invention of the printing
press, books didn't acquire a mind and inflict evil upon humanity. But
the printing press did change civilization. Nor will computers in the
21st century acquire a mind and inflict evil on humanity, because
computers can't think any more than parchment or books can think.
But the information age will change civilization.
The salient harm that the silly "artificial intelligence" trope will
do to humanity, aside from the general stupidity the concept fosters, is
that it will distract us from the astonishingly potent transformation
of our civilization that we will bring about in the information
revolution. The transformation will be much more radical and rapid than
the transformation in the 15th century caused by the printing press.
Within a century or two after Gutenberg, millions of people had read
things they had never read before, and thought of things they had never
thought of before, and doubted and believed new things and found new
ways to change their lives and their cultures. The Renaissance flowered,
the Reformation raged, the Enlightenment (however misnamed) bloomed,
and modernity dawned.
By 1648 northern and central Europe was bled white and a third of the
population of Germany was dead from famine and war. By 1789 Napoleon
was studying his schoolbooks. By 1867 Marx had a publisher for Das Kapital, and by 1925 Hitler published volume one of Mein Kampf.
Parchment and books and computers are the tools -- merely the tools -- by which humanity transforms itself.
The information revolution will leverage human intentions and
mistakes in ways we can only begin to imagine. None of the
transformation will have anything to do with science fiction stories
about malevolent robots. It's the malevolent humans -- and even the
well-intentioned humans -- who will fashion our ends.
Artificial intelligence is an oxymoron. Only human beings have intelligence. We use tools to bring about our ends, and the human
information revolution made possible by our tools will transform our
civilization, for better or worse and probably both. But the only real
threat "artificial intelligence" poses is that it disposes us to dread HAL
when we should be contemplating the transformation -- a transformation
far more fundamental and astonishing than writing or the printing press
-- that humanity will bring upon itself via the information revolution.
René Girard has a few thoughts about what we do to ourselves.