Sunday, November 1, 2020

Artificial Reckoning

One of the books in rotation on my bedside table this past week was Artificial Intelligence: The Quest for the Ultimate Thinking Machine by Richard Urwin. This is by no means a technical manual, but (unlike most general-audience books on the subject) neither is it entirely simplistic. Urwin describes, with examples, the various approaches to Artificial Intelligence: fuzzy logic, subsumption architecture, swarm intelligence, evolutionary computing, etc. He explains how each is suited to particular contexts and how they could work in concert in a general intelligence. The book provides some brief and basic insight into the minds of our future robot overlords. Just kidding…maybe. Networked AIs are everywhere in our lives today. They recommend books, adjust our furnaces, and trade our stocks. Even modern toasters often have remarkable computing power, not because they need it to toast bread but because chips are cheap and can provide extra features. (Back in 2016 hackers exploited this by hiding malware in household appliances with wireless capability.) It is helpful to understand a little about them and how they operate.
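
To make one of those approaches a little more concrete, here is a minimal sketch of the fuzzy-logic idea applied to the furnace example above. It is my own illustration rather than anything from Urwin's book, and the temperatures and membership functions are invented for the occasion: instead of a hard on/off threshold, the controller treats the room as partly "cold" and partly "warm" and blends its output accordingly.

```python
# A toy fuzzy-logic furnace controller (illustrative only; thresholds invented).
# The room temperature has partial membership in the fuzzy sets "cold" and
# "warm", and the heating power is a blend weighted by those memberships.

def membership_cold(temp_c):
    """Degree (0..1) to which temp_c counts as 'cold': fully cold at 15C, not cold at 22C."""
    if temp_c <= 15:
        return 1.0
    if temp_c >= 22:
        return 0.0
    return (22 - temp_c) / (22 - 15)

def membership_warm(temp_c):
    """Degree (0..1) to which temp_c counts as 'warm'; here simply the complement of 'cold'."""
    return 1.0 - membership_cold(temp_c)

def furnace_power(temp_c):
    """Blend the rules 'if cold, heat high' and 'if warm, heat low' by their strengths."""
    cold = membership_cold(temp_c)
    warm = membership_warm(temp_c)
    HIGH, LOW = 100.0, 0.0  # percent output associated with each rule
    return (cold * HIGH + warm * LOW) / (cold + warm)

for t in (14, 17, 19, 21, 23):
    print(f"{t}C -> furnace at {furnace_power(t):.0f}%")
```

The appeal is that the furnace ramps smoothly instead of banging between full blast and off, which is roughly why this sort of logic ends up in thermostats and rice cookers.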


The three types of AI are Pragmatic, Weak, and Strong. The first is purely task-oriented: as simple as a Roomba or as complex as a self-driving Tesla. At the upper end these AIs might approach the intelligence of an insect; they don't need more than that to do their jobs. The second type, requiring hefty computational speed and power, simulates general intelligence, e.g. IBM's Jeopardy champion Watson, some of the more sophisticated medical diagnostic programs, and the conversational robot Sophia by Hanson Robotics. The key word is "simulates." They do not think in the way people do. Using (sometimes evolving) algorithms, they plow through millions of possible responses to a query and find the best match with machine efficiency – and machine cluelessness. There is nothing aware about them. Strong AI would think like a person. It remains aspirational rather than something that yet exists, but many researchers are working with artificial neural nets, genetic algorithms, and other technologies with this as an ultimate goal. It is an open question whether such an AI ever could be conscious, defined as that meta-state of not only knowing but knowing that one knows.
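
As an illustration of what that plowing-through-and-matching looks like (a deliberately crude sketch of my own, not how Watson or any real system actually works), imagine a program that holds a pile of canned answers and simply returns whichever one shares the most words with the question. It can look weirdly competent while understanding nothing at all.

```python
import string

# A toy "best match" responder: score canned answers against a query by
# word overlap and return the highest scorer. Efficient matching, zero
# understanding: the program never knows what any of the words mean.

CANNED_ANSWERS = [
    "Toronto is the largest city in Canada.",
    "Chicago is known as the Windy City.",
    "The 1904 World's Fair was held in St. Louis.",
]

def tokens(text):
    """Lowercased words with surrounding punctuation stripped."""
    return {word.strip(string.punctuation) for word in text.lower().split()}

def score(query, candidate):
    """Count the words the query and a candidate answer have in common."""
    return len(tokens(query) & tokens(candidate))

def best_match(query):
    """Return the canned answer with the highest overlap score."""
    return max(CANNED_ANSWERS, key=lambda answer: score(query, answer))

print(best_match("Which city hosted the 1904 World's Fair?"))
```

Scale the pile of answers up to millions and swap word overlap for something far more sophisticated, and you have the flavor of Weak AI: very good matching, nobody home.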

To a user, a sufficiently sophisticated Weak AI (one that acts as though it is conscious) would be hard to distinguish from a Strong AI, but there would be a difference from the AI’s perspective. It would feel like something to be a Strong AI; not so Weak AI, which doesn’t feel anything. Weak AI doesn’t have a perspective any more than your desktop calculator has one. More than a few sci-fi plots (e.g. Ex Machina) center on the difference.

In science fiction, machine consciousness usually ends badly for people, whether it is HAL deciding to off Dave, Skynet deciding to off the human species, or Cylons being the ultimate rebellious offspring. It was the plot of the very first drama to use the word "robot": R.U.R. (1920). Despite the semiautonomous robots proliferating on the battlefield – the next generation of fighter aircraft probably will be unmanned – this doesn't worry me much. The people giving the machines their objectives and directions always have been and always will be more dangerous. AIs are very unlikely to develop animus toward people of their own accord; they don't, so to speak, have skin in the game. Plenty of humans, though, have animus toward their own kind. Some want to destroy people with different beliefs or characteristics while others want to destroy everybody. Witness the apocalyptic cult Aum Shinrikyo, whose members sarin-gassed the Tokyo subway back in the 90s as an intended step toward bringing about the end of the world. Grumpy AI is pretty far down the list of potential risks.

Charles Stross described another possible future in his novel Saturn's Children, in which the characters are humaniform robots. In Stross' book the robots had, long ago and inadvertently, killed off the human race through love, not war. Humans so preferred their AI love-bots to other humans that the species simply died out. This has a strange credibility. The fantasy of robot love has recurred in books, songs, and movies for the past century. A few examples from among countless others: the movie Metropolis in 1927, Connie Francis' hit single Robot Man 60 years ago, the movies Cherry 2000 and Making Mr. Right back in the 80s, and t.A.T.u.'s Robot in the 2000s. Today, simulated lovers can be created in computer games such as New Love Plus+ and phone apps such as Metro PD: Close To You. Many gamers already prefer them to the real thing. Combining these creations with life-size physical robots can't be far away.

If humans are to disappear, I suppose there are worse ways to go. Meantime, I'm reminded of Robert Frost's poem "A Considerable Speck," in which he notices a mite on the page in front of him.

 

“I have a mind myself and recognize
Mind when I meet with it in any guise
No one can know how glad I am to find
On any sheet the least display of mind.”


My take on AI is much the same.

 

Hanson Robotics’ Sophia



