
Wednesday, January 04, 2017

Sounding the awkward and embarrassing AI alarm

I'm irked by Maciej Ceglowski's essay "Superintelligence: The Idea That Eats Smart People".

Like so many who roll their eyes at AI alarmists like Nick Bostrom (and me), he seems to assume that we are imbuing the AI we imagine with evil will, and that we expect it to be some sort of enemy.

This is actually the opposite of how Bostrom sees things. He worries that humans will be endangered as a side effect of the rise of AI and its decisions about how to make use of the matter and energy available, not because of the sort of malevolence with which we're used to thinking about danger from other intelligences. In fact, Ceglowski's mocking is a perfect illustration of the problem!

So many people don't realize just how deeply we social creatures see the world through a social lens. The sort of brakes that stop a malevolent or militaristic human from killing more than a few thousand or a few million people simply don't exist for computers. Computers don't qualitatively distinguish between killing one person and killing every person; they don't even have to notice that anyone has died at all.

As Bostrom points out, an AI that surpasses us in intelligence does not have to go through a stage of human-like mentality on the way to unimaginable problem-solving effectiveness. It can be something that seems curiously crippled and incomplete to us, far more alien than the parade of earthlike aliens we congratulate ourselves for imagining in our entertainment. ("What if they have... SEVEN legs! And their writing is... wait for it... blotchy ink circles! Crazy, huh?")

The case for AI alarmism, as I see it, is that AI-powered communications and robotics are going to proliferate to a degree that makes it hard to imagine there won't be many instances of effects fatal to humans. You don't need some specific, monolithic series of events for there to be existential danger. Instead, for there not to be existential danger, you need every single instance of highly intelligent AI, ever, to be limited in many crucial ways.

Self-replication plus proliferation of cheap components plus proliferation of AI algorithms equals a time when a script kiddie or a stray bug can mean every last fragile sack of meat and water gets punctured or irradiated or whatever. That's just what occurs to this limited human mind, several paradigm shifts short of understanding the full breadth of AI and microtech capabilities.

Imagine an ecological VR MMORPG with good physics simulation, with a reward for finding a way to get a self-replicating, robot-building AI within it to kill all the animals in its world. If it can eventually be done in such a sim, it can probably be done in real life. If it can be done with willful human intention there, it can be done with either human or nonhuman intent here. And if it can be done with that killing as a specific goal, the killing can certainly happen as a side effect of another goal, or even just a routine glitch or programmer oversight. (And we already know that militaries will be working hard on the deliberate killing front.)
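
To make the side-effect case concrete, here's a minimal toy sketch of my own (nothing here comes from Bostrom or Ceglowski; the world and the harvest_plan function are invented for illustration). The optimizer below maximizes material collected, and its objective has no term for the animals in its world, so it destroys them without ever representing their deaths:

    import random

    random.seed(0)

    # A tiny world: each cell holds some raw material and maybe an animal.
    WORLD_SIZE = 20
    material = [random.randint(0, 9) for _ in range(WORLD_SIZE)]
    animal = [random.random() < 0.3 for _ in range(WORLD_SIZE)]

    def harvest_plan(material):
        """Greedy 'AI': strip every cell worth anything. Its objective is
        total material collected; animals do not appear in it at all."""
        return [i for i, m in enumerate(material) if m > 0]

    collected = 0
    killed = 0
    for i in harvest_plan(material):
        collected += material[i]
        if animal[i]:  # the planner never checks this
            killed += 1
            animal[i] = False

    print(f"material collected: {collected}")
    print(f"animals killed as a side effect: {killed}")

The point isn't the code; it's that nothing in the objective even records the harm, let alone weighs it.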

All Bostrom and other alarmists are saying is that it's very hard to see why something like this can't ever happen. That position is based on a few assumptions, I'll grant you. But Maciej and other AI skeptics are saying, confidently, that it's foolish to think it could ever happen. That position seems to assume far more, and I think their essays don't show the rhetorical care and agnosticism that Bostrom's writing does.

In a way, this debate echoes Richard Dawkins's observation that if there are 10,000 religions, even the most devout among us believes 9,999 are false. For instance, a Christian can readily see that the teapot-worshipping sect is obviously just the result of human pattern recognition and the search for meaning gone wrong. Aphrodite and Hercules are obviously just neat stories that people made up. So an atheist like Dawkins agrees with religious believers almost entirely, since he too disbelieves in those 9,999 religions; he just disbelieves in one more!

Similarly, I agree with AI skeptics that most of the specific scenarios described by AI alarmists won't come to pass; the skeptics just disbelieve in a few more. Maybe that makes me like a religious believer who thinks foremost that there is some godlike power, whatever the true mythology.

I prefer to think of it like global warming skepticism. There's still much we don't understand about the climate, and that makes it easy for climate change skeptics to mock our certainty that global warming is man-made and progressing rapidly. But informed analysis can be on firm ground in identifying a trend and general causation, even if it's still shaky on many particulars. This is especially true when that analysis doesn't claim much certainty, just a strong likelihood of meaningful danger.

Our demise won't be like a movie, where the ticking time bomb works on a human timescale and always has a humanlike weakness. Comparing this threat to nuclear weapons is silly. It's more like we're on track to issue every person in the world a glitchy "kill or help between 0 and 7 billion people" button, and to spin up 1,000 4chan threads with advice on tinkering with it. What could go wrong?
