Sunday, July 11, 2010

Kasparov: the "strange sensation" of android chess

I'm compelled to excerpt at length Garry Kasparov's recent essay in the New York Review of Books in which he surveys the state of chess computing, and its implications for artificial intelligence and human-computer interaction.

It's always dangerous to draw too confident a connection between a thinker's scientific works and her politics (though I detect a consistency of mantra and inflexibility in both versions of Noam Chomsky). But it can't be a coincidence that the most prominent intellectual in modern chess is also one of the greatest democratic dissidents in Russia, a place where it is even more dangerous and lonely to be a dissident than it was during much of Soviet times.

Kasparov touches on a common complaint about artificial intelligence: that it has failed to replicate the human way of thinking. He writes:
The AI crowd, too, was pleased with the result and the attention, but dismayed by the fact that Deep Blue was hardly what their predecessors had imagined decades earlier when they dreamed of creating a machine to defeat the world chess champion. Instead of a computer that thought and played chess like a human, with human creativity and intuition, they got one that played like a machine...
Eric Siegel, a brilliant lecturer who taught me AI at Columbia, used to explain that there were four kinds of artificial intelligence, which were usually conflated into one -- to great confusion. You could get a computer to produce results that seemed human, such as the Eliza psychologist chat bot; you could get it to produce valuable insights that would never be confused with human ones, such as an information kiosk that is helpful to humans but never pretends not to be a machine; you could get it to be human-like in its thinking, such as systems like Wolfram Alpha, which build up knowledge using logic and building blocks of information; or you could have it be specifically computer-like in its thinking, such as a weather predictor that uses chaos theory to detect impossibly obscure patterns.

The public expected that by developing a machine whose output -- grandmaster-level chess moves -- had a quality heretofore known only among humans, researchers would be forced to develop AI that was human-like in its thinking.

There are other forms of AI than those that Siegel listed, however, and Kasparov was drawn to use his role on the main stage of AI to define and explore these.

From the article:
It was my luck (perhaps my bad luck) to be the world chess champion during the critical years in which computers challenged, then surpassed, human chess players. Before 1994 and after 2004 these duels held little interest. The computers quickly went from too weak to too strong. But for a span of ten years these contests were fascinating clashes between the computational power of the machines (and, lest we forget, the human wisdom of their programmers) and the intuition and knowledge of the grandmaster. In chess, as in so many things, what computers are good at is where humans are weak, and vice versa. This gave me an idea for an experiment. What if instead of human versus machine we played as partners? My brainchild saw the light of day in a match in 1998 in León, Spain, and we called it “Advanced Chess.” Each player had a PC at hand running the chess software of his choice during the game. The idea was to create the highest level of chess ever played, a synthesis of the best of man and machine.

Although I had prepared for the unusual format, my match against the Bulgarian Veselin Topalov, until recently the world’s number one ranked player, was full of strange sensations. Having a computer program available during play was as disturbing as it was exciting. And being able to access a database of a few million games meant that we didn’t have to strain our memories nearly as much in the opening, whose possibilities have been thoroughly catalogued over the years. But since we both had equal access to the same database, the advantage still came down to creating a new idea at some point.

...A month earlier I had defeated the Bulgarian in a match of “regular” rapid chess 4–0. Our advanced chess match ended in a 3–3 draw. My advantage in calculating tactics had been nullified by the machine.

...Even more notable was how the advanced chess experiment continued. In 2005, the online chess-playing site hosted what it called a “freestyle” chess tournament in which anyone could compete in teams with other players or computers. Normally, “anti-cheating” algorithms are employed by online sites to prevent, or at least discourage, players from cheating with computer assistance. (I wonder if these detection algorithms, which employ diagnostic analysis of moves and calculate probabilities, are any less “intelligent” than the playing programs they detect.)

Lured by the substantial prize money, several groups of strong grandmasters working with several computers at the same time entered the competition. At first, the results seemed predictable. The teams of human plus machine dominated even the strongest computers. The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.
Most sci-fi set in the future features computer intelligences that completely trump humans at solving problems, at least ones that don't require emotion. But the experiment Kasparov inspired suggests that the pairing of humans and machines might be superior at certain types of problems for a very long time. We already have some computer algorithms that farm tasks out to human minds, such as Web scraping bots that need help to decode scrambled-text CAPTCHAs, and can get it quite cheaply in the Third World. The day may come when a programmer can make a function call and specify that it use human intelligence rather than machine intelligence, and trust a system like Amazon's Mechanical Turk to farm out the task and return a result.

Interestingly, Star Trek is an exception to this sci-fi rule. The computer that manages the Enterprise is powerful, but the crew never asks it to suggest solutions to problems. (This has an obvious advantage from a plot standpoint.) The android Data is a computer intelligence that goes beyond the ship's computer's limitations; in fact, he is capable of all four forms of AI that Siegel described. In a single scene he can suggest a possible avenue of inquiry, tap at a computer keyboard at an inhuman pace, announce that the avenue has some particular probability of success, and then express doubt in an unmistakably human way -- going through all four forms of AI. The Borg, on the other hand, are a Kasparovian intelligence: rather than simply construct machine agents, they use organic creatures and link their minds together in a decentralized network with no artificially intelligent core.
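The "function call on human intelligence" idea above can be sketched in a few lines. To be clear, nothing below is a real Mechanical Turk API: the `HumanBackend` class, the `human_task` decorator, and the canned-answer table are all hypothetical, standing in for a crowdsourcing service so the routing idea can run locally.

```python
# Hypothetical sketch: routing a function call to human intelligence.
# HumanBackend simulates a worker pool; a real system would post the
# task to a service like Mechanical Turk and wait for a worker's reply.

from typing import Callable, Dict


class HumanBackend:
    """Stand-in for a crowdsourcing service (hypothetical)."""

    def __init__(self, canned_answers: Dict[str, str]):
        # Pretend these are answers human workers have already supplied.
        self.canned_answers = canned_answers

    def submit(self, prompt: str) -> str:
        # A real submit() would block or poll until a human responds;
        # here we just look up the simulated worker's answer.
        return self.canned_answers.get(prompt, "(no worker response)")


def human_task(backend: HumanBackend) -> Callable:
    """Decorator that farms a function's work out to humans instead of
    executing its body -- the programmer calls it like any other function."""

    def decorate(fn: Callable) -> Callable:
        def wrapper(prompt: str) -> str:
            return backend.submit(prompt)

        return wrapper

    return decorate


# Simulated worker pool that has already transcribed one CAPTCHA image.
backend = HumanBackend({"captcha-7f3.png": "xK9qR2"})


@human_task(backend)
def transcribe_captcha(prompt: str) -> str:
    """Read the scrambled text in the named image (done by a human)."""


print(transcribe_captcha("captcha-7f3.png"))  # prints the simulated human's answer
```

The point of the sketch is the calling convention, not the backend: from the caller's side, `transcribe_captcha` looks like ordinary machine computation, which is exactly the abstraction the paragraph imagines.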

And then there is Isaac Asimov's classic short story "The Last Question," which introduces an entirely new possible form of AI, which I won't give away.


Blogger Donkey Hoty on Tue Jul 13, 11:13:00 AM:
I always thought "The Last Question" served nicely as a bookend to Clarke's "The Nine Billion Names of God."
Blogger Katy on Thu Jul 15, 11:51:00 AM:
What about the Star Wars droids? We've been rewatching the movies lately, and my childhood questions about the droids remain unanswered. They're capable not only of learning, but also feeling and thought. That didn't make sense to me when I was 8, and it doesn't make sense to me now.
Blogger Ben on Sat Jul 17, 07:14:00 PM:
... Or maybe just a simulation of feeling. Our ability to project human experience onto animals and inanimate objects is boundless. R2D2 clicks and whirrs and beeps and we conclude it must be sad. That's very promising for our ability to relate to robots, and it's already being used with robots to therapeutically reach senile and autistic people.