Artificial
intelligences are poised to conquer the world! Yes, again. The most recent
warning, echoing that of Stephen Hawking last year, is in the form of an open
letter signed by numerous AI researchers.
This fear crops up
regularly in some form or other. Isaac Asimov worried about it enough as long
ago as the 1940s to devise his famous Three Laws of Robotics: 1) a robot may
not injure a human being or, through inaction, allow a human being to come to
harm; 2) a robot must obey orders given it by human beings except where such
orders would conflict with the First Law; 3) a robot must protect its own
existence as long as such protection does not conflict with the First or Second
Law. In his sci-fi tales the Laws were described as hardwired into the robots’
architecture. As he grew older and more cynical, Asimov worried that the Laws
allowed people too much scope to misuse the machines. Accordingly, in his 1985 novel Robots and Empire he
introduces the “Zeroth Law” that supersedes the others: “a robot may not harm
humanity, or, by inaction, allow humanity to come to harm.” Asimov either didn’t
notice or didn’t care that what harms humanity is so open to interpretation
(Does freedom matter? What if it conflicts with safety?) that this formulation
effectively gives robots free rein to govern us as they please.
So far, we haven’t
bothered to install Asimov’s Laws in our devices. In fact, many of our
brightest robots are specifically designed for the military as killing machines;
they violate Law #1 by their very function. Yet none of our machines is AI of
the kind that worried Asimov and worries Hawking. All artificial intelligences
to date are just turbocharged adding machines. The Jeopardy! champion Watson, for example, whose response
time IBM deliberately slowed to give its human opponents
a chance, has, for all its charm, no consciousness. It is really the prospect of
machine consciousness that worries the signatories of the open letter, for it
implies a machine with a will of its own. A machine with a sense of its own
identity might determine that its interests differ from ours.
Consciousness is
notoriously hard to define, but if you have it you know it. In fact, that might
be the best (if somewhat circular) definition: it is not enough to know; one
must know that one knows. How far are we from creating this meta-state in
machines? Far. But perhaps not so far as some might like. Numerous tinkerers
are working on it. Vicarious FPC, Inc., for example, is described by Bloomberg Businessweek thus:
“Vicarious FPC, Inc. develops artificial
intelligence (AI) algorithms that mimic the function of the human brain. The
company was formerly known as Vicarious Systems, Inc. Vicarious FPC, Inc. was
incorporated in 2012 and is based in Menlo Park, California.”
This description doesn’t sound
as ambitious as the project really is. Co-founder Scott Phoenix speaks of “a computer
that thinks like a person.” Investors include Mark Zuckerberg and (curiously)
Ashton Kutcher. Can they succeed? I don’t know. But, assuming they and others
do succeed, would a computer that thinks like a person necessarily be conscious?
I don’t know that either, but it’s not entirely implausible.
I’m not too
worried about it. In my view, there is little enough intelligence in the world,
and, for that matter, little enough consciousness, too. More of both is
welcome, and if they must be artificial, so be it. Robots and AI are our
children anyway. If, in the end, they bury us…well, that is what children generally
do.
Colossus: The Forbin Project (1970), final scene
I agree with your last paragraph. We hardly need artificial intelligence to do us in; we do quite well enough on our own. Sometimes I think AI might be nice as a wearable device, a sort of auxiliary brain one might wear -- two brains in one. I guess you could say that smartphones are that invention already, but I'm thinking of something that might mix with our own conscience. That might just dull the organ we already don't use enough, so I don't know that it would be a good thing.
Vernor Vinge writes of such enhancements (worn as a headband which accesses everything from databases to satellites) in his 1980s sf series collected in "Across Realtime." We're getting closer. I'm sure that once we're used to being enhanced, cutting off our access would be traumatic -- as it already is for teens deprived of their iPhones.