Last night I finished The Children of the Sky, Vernor Vinge’s sequel to his marvelously imaginative sci-fi novel A Fire Upon the Deep, set largely on a world where packs of telepathic canines are the intelligent life-forms – the intelligence resides not in the individual “dogs” (singly, they are little more than dogs) but in the packs themselves. I’d recommend Fire to anyone, but Children only to the hardest of hardcore Vinge fans. Nevertheless, it brought to mind something else for which Vinge is well known, even among those who are not fans of his novels.
It has been 20 years since Vinge presented his paper, The Coming Technological Singularity: How to Survive in the Post-Human Era. The singularity refers to the emergence of superhuman intelligence – aware intelligence – after which we will enter “a regime as radically different from our human past as we humans are from the lower animals.” He predicted the moment would arrive after 2005 but before 2030. He hasn’t yet altered that prediction. After the singularity, the future belongs to post-humans, who might or might not be just technology-enhanced humans.
Artificial Intelligence is an old concept. Self-aware computers and robots have been a staple of sci-fi almost from the start; Maria in Metropolis, HAL in 2001 and Colossus in The Forbin Project are obvious examples. Vinge was not the first to suggest that a bio-cultural sea-change would follow the appearance of true AI, nor did he invent the term “singularity” to describe it. As Vinge notes in his 1993 paper, the mathematician and computing pioneer John von Neumann (1903-1957) used the term in a similar context. However, Vinge popularized the idea more successfully than anyone before him, and most discussions of the singularity begin with Vinge. He notes that there are several routes by which superintelligence could arrive:
"--There may be developed computers that are 'awake' and superhumanly intelligent…
--Large computer networks (and their associated users) may 'wake up' as a superhumanly intelligent entity.
--Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
--Biological science may provide means to improve natural human intellect."
The last option recalls the eugenics movement popular in intellectual circles a century ago. Even with the growing potential of genetic engineering, however, the inside track presently seems to belong to electronic/photonic technology. Already, an otherwise ordinary person connected to the internet can ace an IQ test, even though the credit for that belongs to his hardware rather than his wetware. Google Glass (demo below), scheduled for limited market release this year, offers a continuous internet connection with a heads-up display. That comes close to option #3.
The ultimate game-changer, though, would be true machine intelligence. Sci-fi is full of Terminators and other evil intelligent machines bent on destroying humanity, but nothing of the sort need be the case. Machines will have whatever values we give them. (On second thought, there is something scary about that.) A sci-fi novel worth a read on this point is Saturn’s Children (2008) by Charles Stross. Humanity in his future solar system has ceased to exist, not because it was destroyed but because it simply faded away; humans saw no point in biological reproduction when their robots were so superior. The robots retain their human-like forms and values, even though those make little sense in a universe with no more people, because to change would alter their identities – the essence of what they are.
Does tomorrow really belong to our souped-up tinkertoys? Maybe. Perhaps that is for the best, too. If the idea makes you uncomfortable though, you are not alone. Vinge himself remarks, “I think I'd be more comfortable if I were regarding these transcendental events from one thousand years remove ... instead of twenty.”
Google Glass Demo
HAL-9000 Would Rather Do It Himself in 2001: A Space Odyssey (1968)