The carmaker Tesla has been in the
news lately, and not in a good way. I’m not referring to business news about production
holdups of the sort that occur in all manufacturing from time to time (recently
in the Ford F150 line, for example), but to accidents involving the Tesla Autopilot.
By the numbers, this self-driving feature is safer than a human driver. Much
safer. Among human-driven cars in the U.S. there is 1 fatality for every
86,000,000 miles driven. For Autopiloted Teslas there is 1 per 320,000,000
miles – roughly a 270% improvement in miles driven per fatality. Yet accidents do happen. In the past couple of months alone, Autopiloted Teslas have been involved in non-fatal collisions with a parked police car and a fire truck, as well as a fatal collision with a dividing barrier.
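For the curious, here is a back-of-the-envelope check of where that 270% figure comes from: a minimal sketch in Python, using only the two mileage figures cited above (the figures are as reported; real-world rates vary with road type and conditions).

```python
# Figures as cited above; actual rates vary with road type and conditions.
human_miles_per_fatality = 86_000_000       # U.S. average, human drivers
autopilot_miles_per_fatality = 320_000_000  # Tesla Autopilot, as reported

# Improvement in miles driven per fatality: (320M - 86M) / 86M
improvement = (autopilot_miles_per_fatality - human_miles_per_fatality) \
    / human_miles_per_fatality
print(f"Improvement in miles per fatality: {improvement:.0%}")  # ~272%

# Equivalently, Autopilot's fatality rate per mile is about 27% of the human rate.
rate_ratio = human_miles_per_fatality / autopilot_miles_per_fatality
print(f"Autopilot fatality rate vs. human drivers: {rate_ratio:.0%}")  # ~27%
```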
[Image: Tesla X interior]
Self-driving capability
is a form of Artificial Intelligence, and accidents by AIs tend to be of a
different character than those by people. People get distracted. They misjudge
distance, speed, acceleration, risks, and time. They are careless. They deliberately
take chances because of impatience or just for the sport of it. AI is great at speed/time/distance
judgments; it doesn’t get distracted, and it doesn’t know how to be careless.
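To see why machines excel at those judgments, consider the kind of time-to-collision arithmetic a driver-assist system runs continuously. The sketch below is purely illustrative – my own simplified example, not any manufacturer’s actual logic:

```python
def time_to_collision(gap_m: float, own_speed_mps: float,
                      lead_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed.
    Returns infinity when the gap is holding steady or opening."""
    closing_speed = own_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return float("inf")  # not closing in on the lead vehicle
    return gap_m / closing_speed

# Following 30 m behind a car doing 20 m/s while traveling at 28 m/s:
# impact is 30 / (28 - 20) = 3.75 seconds away.
print(time_to_collision(gap_m=30.0, own_speed_mps=28.0, lead_speed_mps=20.0))
```

A human approximates this from experience; the machine computes it exactly, many times a second.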
AI is not very good, though, at comprehending and responding to novel
situations. People can tell the difference between a road and a road hazard in
pretty much all circumstances, no matter how unusual. AIs might have trouble recognizing
the difference in unfamiliar configurations. People often are foolish
drivers, but they generally know when they are being foolish and taking risks.
AIs don’t have a clue. This means that the AI accidents that do happen probably
wouldn’t have happened had a human been driving. That more lives and property
are saved by AI drivers than are lost to them provides no comfort to the
victims. Still, we shouldn’t lose sight of the numbers, and self-driving systems get better and less accident-prone each year.
The tech has come a long
way since the early 2000s. In 2004 DARPA offered a $1 million prize to the winner
of the Grand Challenge, a 150-mile course in the Mojave Desert to be driven by
totally autonomous self-driving vehicles. Not a single one of the 15 vehicles
that entered the race finished it. Three never made it past the starting line
– one of them flipped over. In the 2005 race, five of the 23 entrants finished,
though one of those exceeded the maximum 10-hour time limit. By 2007 the tech
had advanced so much that the desert no longer was deemed sufficiently
challenging, so the competition became the Urban Challenge, complete with
traffic lights and four-way stops. Six of the 11 finalist teams completed the course.
AIs, however, do not think like
people and probably never will. The limitations of AI at making “common sense”
judgments are what concern many folks not just with regard to civilian cars but
with regard to the growing number of autonomous military robots. Said UN
investigator Christof Heyns, “a
decision to allow machines to be deployed to kill human beings deserves a
collective pause worldwide.” Yet the military situation is analogous to that of
self-driving cars. Presumably war machines won’t be unleashed except in combat
situations, and in those circumstances they are less prone than people to
friendly fire or to misidentifying targets – and they never act in anger or
from fear. They, as The Economist noted, "have the potential
to act more humanely than people. Stress does not affect a robot's judgment in
the way it affects a soldier's." Robots, in short, are kinder, at least in effect.
They don’t understand kindness or cruelty as such. For that they would
need consciousness – the meta-state of not only knowing but knowing that one
knows – and outside of science fiction they are far from having that.
(Whether robots ever could be better than people at love as well as
at war is a discussion I’ll leave to others, e.g. roboticist
David Levy who authored Love and Sex with Robots: The
Evolution of Human-Robot Relationships.)
As for cars, I’m not willing to surrender the steering of
mine just yet. I don’t even like to use Cruise Control, though every
vehicle I’ve bought in the past 25 years has had it. Nonetheless, it suits me
just fine if all the other cars on the road are self-driving. I would feel happier
and safer.
Radiohead – Killer Cars
Comment: I like that they are working on this, but I don't know if I'll be able to see it in my lifetime. As we age, though, I can see how something like that would be very worthwhile.
Reply: I’m not confident that we ever can replicate consciousness per se, but devices are getting better at simulating it all the time. Companions have much further to go than cars, as is evident in RealDoll’s robots: https://www.youtube.com/watch?v=orBH_Qnw3eY