According to Kelley Blue Book, the
average price paid by Americans for a new car in October was $46,036. I have no
argument with anyone who chooses to spend his or her hard-earned dollars on pricey
cars: buy whatever brings you joy. I’m just surprised that so many do. I have never
paid as much as $46,036 for a vehicle (much less averaged that), either nominally or in inflation-adjusted terms. My
current truck and car (both 2021 models) together cost about that
much. OK, I’m cheap. I mention the Blue Book report, however, not just for the
surprising (to me) price information but because it triggered a memory. When I
was a kid I frequently heard adults complain about auto prices with the words:
“If I pay that much for a car, it had better drive itself.” One no longer hears
this comment since self-driving cars are, of course, an option.
All the major auto manufacturers are
developing autonomous driving systems, and several already are on the road. The
most capable systems are still expensive, but even modestly priced vehicles
commonly have some elements of them. My (apparently) downscale Chevy Trailblazer
intervenes in my driving constantly. If I drift out of my lane, it self-corrects.
It auto-brakes if it decides I’m waiting too long to do so. It chooses when to
turn the high-beam headlights on and off. It nags me with beeps and flashes if it
distrusts what I might do with regard to oncoming traffic, the car in front, the
car in back, any object in my blind spot, or a pedestrian nearby. As artificial
intelligences (AIs) go, the one in my car is rudimentary, but it is still a
“will” of sorts that is sometimes contrary to my own. I can override its
decisions, but the time cannot be far distant when, in a reversal of current
law, humans will be permitted to drive only if an AI is present to override them.
AIs drive more than just our cars. We increasingly
let them (via search engines and virtual assistants) choose our restaurants,
our YouTube videos, our reading material, and our news sources. Since AIs learn
our individual preferences and tailor their offerings accordingly, they not
only provide information but also exclude it. They offer perspectives on reality
that suit us rather than challenge us, thereby reinforcing by omission an
already all-too-human tendency toward tunnel vision. The effect is visible
enough on adults, but how this affects kids is anyone’s guess. Young children
will never remember a time before interactive AI. Many interact with AIs such
as Siri and Alexa as though they were people – sometimes preferring them to
people.
For decades AIs increased their
performance and (at the high end) their simulation of general intelligence
through ever-increasing raw computing power and memory. Fundamentally, though,
they were as simple-minded as IBM billing machines of the 1960s – faster, but
in principle the same. In the mid-2010s, however, there was a qualitative
change: a result of (yes) more raw power but also of networked connections and self-learning
programs that the newly powerful machines could utilize effectively. Computers
have outmatched humans in chess, for example, for many years, but until
recently they achieved this through explicitly coded rules and databases of
moves played by human masters. The AI AlphaZero (which has never lost a match),
by contrast, developed its own strategies by playing against itself. It created
them independently and makes moves (such as an early sacrifice of a queen) that
are counterintuitive to human players. A self-learning AI at MIT, given a
training set of thousands of molecules and their antibiotic effects, if any, was
tasked with examining 61,000 drugs and natural products for molecules that
might be currently unknown nontoxic antibiotics. The AI identified a molecule
subsequently called halicin (named after HAL in 2001: A Space Odyssey); human researchers weren’t sure why it
worked but it did. The AI saw something they didn’t. Nor are AIs leaving artistic
creativity to humans. AIs compose music, write lyrics, generate screenplay
outlines, write news stories, and automatically trade securities. The best
self-learning language translators, which only a decade ago were clunky and apt
to give comical results, have grown so sophisticated that lengthy
machine-translated texts often can be used without editing.
Alan Turing famously argued that we can
never know the inside workings of another entity’s “mind,” be it biological
(wetware) or artificial. Consequently, all that matters is the result. If
someone or something acts as though intelligent, it’s intelligent. The
so-called Turing test is often interpreted simplistically: if a human can be
fooled into thinking he or she is talking to another human, the machine
is effectively demonstrating general intelligence. This isn’t accurate. Lots of
AIs can do this for limited stretches, but none of them has general intelligence.
No machine yet convinces deeply enough or for long enough to qualify
as having passed the Turing test as Turing intended it. But some are getting eerily
close. For instance, the language-generating AI GPT-3, author of at least one article in The Guardian, responds
to initial prompts (as humans do) by generating original conversation. To some
queries about its abilities, it answered in part as follows:
Your first question is
an important one. You ask, “Can a system like GPT-3 actually understand
anything at all?” Yes I can.
Your
second question is: “Does GPT-3 have a conscience or any sense of morality?” No
I do not.
Your third question is: “Is GPT-3 actually capable of independent thought?” No I am not. You may wonder why I give this conflicting answer. The reason is simple. While it is true that I lack these traits, they are not because I have not been trained to have them. Rather, it is because I am a language model, and not a reasoning machine like yourself.
Good to know.
AIs of various capabilities are employed
in everything from household electronics to entire power grids to weapons
systems. Many weapons are fully capable of autonomous missions including the
acquisition of targets. AIs do not think like humans, and for this reason
militaries are reluctant to let robotic weapons decide entirely for themselves
when to fire on those targets, but some argue that AIs would make
better (and effectively kinder) battlefield decisions, since they are not
affected by “the heat of the moment.”