Among my reviews last week was one of the
series Dollhouse, of which I had
watched several episodes. The season finale (watched last night) took an
unexpected apocalyptic turn due to misuse of technology. The plot reflects a
lingering fear. Modern technology opens up the world (and beyond) in the most
marvelous ways, but futurists have worried about where technology might lead us
since before the word “futurist” was invented (1846). Mary Shelley’s Frankenstein was published in 1818. Later
in the 19th century Verne was wreaking fictional havoc by submarine and aerial
bombardment. At the turn of the century Wells fretted about bio-terrorism, modern
chemistry (e.g. The New Accelerator
and Food of the Gods), and even
atomic warfare (The World Set Free). The warnings continued throughout the 20th century as Colin Clive shrieked
“It’s alive!” in the 1931 movie version of Frankenstein,
an endless series of monsters created by radiation from weapons tests ravaged
civilization in films of the 1950s, and The
Terminator tracked Sarah Connor in the ’80s. Also in the 1980s, Eric
Drexler warned in his book Engines of
Creation of “grey goo”: self-replicating nano-bots overwhelming everything
like a mechanical Blob. Just last year Stephen Hawking warned that artificial
intelligence poses an existential threat to humanity.
Are these kinds of fears overblown? Haven’t
they always been? [In my short story Modern
Times an early modern human is scolded for knapping a better flint
tool.] There have been plenty of industrial accidents from modern technology
including some that were regionally calamitous. Many products from laboratories
have unintended consequences: the amphetamines that doctors once handed out
like candy come to mind. The intended consequences are often scary enough: many
of the weapons imagined by 19th century scifi writers came to be. We’ve
seen what industrial war can mean. Yet, with all that, human lifespans keep
rising and life on balance is vastly safer than in pre-industrial societies. Civilization
has given itself some knocks but has not destroyed itself so far. Have we just
been lucky? There may have been at least one occasion when we were.
In the 1920s and early 30s scientists were
teasing out the basics of nuclear architecture. Some of the most talented
physicists the world has ever known (Enrico Fermi, Leo Szilard, Lise Meitner,
Otto Frisch, Werner Heisenberg, et al.) experimented with uranium; each hoped
to create transuranic elements that don’t occur in nature by bombarding uranium
samples with neutrons. Independently, they succeeded in splitting uranium atoms
in multiple experiments but somehow missed that this was what they were doing. This
is often called the “Five Year Miracle.” These incredibly brilliant people were
looking for elements with higher atomic numbers than uranium; they weren’t
looking for fission, and so in very human fashion they didn’t see it. Not until
1939 did Hahn and Strassmann in Berlin report barium among the byproducts of
their uranium experiments; Meitner and Frisch quickly identified this as
fission, which in turn led physicists to recognize the significance of the U235
isotope and the possibilities for the transuranic element 94, which didn’t yet
exist but later would be named plutonium. Fermi expressed
self-flagellating amazement that he hadn’t seen it all himself in 1934. If he
had, however, those five years of additional development time would have meant
that World War 2 would have been fought as an atomic war, not just by one
nation at the very end but by all sides from the beginning – unless the
prospect of mutually assured destruction deterred war altogether. Given the
actors of the day, a good outcome is hard to imagine.
Whether or not concerns about tech are
justified in a general way is a moot question, of course. Even if one believes the
restraint of technical advancement to be a good idea (I don’t), there is no way
to do it without overarching global authoritarian force, which even if possible
(it’s not) is hardly a good idea either. Scifi writer Larry Niven, in his Known
Space universe, imagined a future in which a global earth government enforced
just such restraint on disruptive new technologies; this literally came back to
bite humanity when it encountered the unrestrained Kzinti, predatory
imperialist aliens rather like sentient tigers. I don’t think extraterrestrial sentient tigers
pose much risk, actually. People do, as we always have. In the end, however, the
tools matter less than who wields them. Tamerlane, for example, is credited
with killing some fifteen to twenty million people with nothing more than
swords and arrows; those victims probably wished they had a better weapon.
Blondie
– Atomic
Yes, it's a bit of a conundrum, but I agree that it's better to stay with the advancement of technology than to suppress it. I don't fear technology as much as those who have command of it. For sure, though, it's the what-ifs and the science that goes wrong that make for some good SF.
Besides, I'll never become a cyborg without it, and that would just be sad.