
Friday, April 27, 2018

The coming age of AI-assisted design

SketchCode is an experimental program that uses AI to turn hand-drawn sketches into simple HTML/CSS websites.

Even just interpreting a drawing and creating matching HTML is impressive, if indeed that works well. I'm skeptical, but even if SketchCode is more of a concept than an actual application, I think it points to a direction that will be hugely fruitful over the next decade.
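To give a flavor of the sketch-to-markup idea, here's a toy sketch of my own — this is *not* how SketchCode actually works, and the shape tokens and mapping are entirely made up — but the core pipeline is just this: recognized shapes in, markup out.

```python
# Toy illustration of a sketch-to-markup pipeline: map a list of recognized
# shapes (the kind of tokens a sketch-recognition model might emit) to HTML.
# The token names and the mapping are invented for illustration.
TOKEN_TO_HTML = {
    "rectangle_wide": '<div class="header"></div>',
    "line_group": "<p>Lorem ipsum</p>",
    "small_box": "<button>OK</button>",
}

def tokens_to_html(tokens):
    """Render a flat list of recognized shape tokens as a trivial HTML body."""
    body = "\n".join("  " + TOKEN_TO_HTML[t] for t in tokens)
    return "<body>\n" + body + "\n</body>"

print(tokens_to_html(["rectangle_wide", "line_group", "small_box"]))
```

The hard part, of course, is everything this toy skips: recognizing the shapes in the first place, and inferring nesting, alignment, and sizing — which is exactly the layout complexity discussed below.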

Almost every domain where AI is making advances is one in which the space of possible interpretations is narrow enough, or basic enough, that it's only exceeding human performance by degree (in either speed or quality or both).

  1. Chess: the space of possible chess moves is generally learned by humans in a single sitting; AI is just good at considering a shitload of them
  2. High-frequency trading: algorithms are generally deciding among a very small number of discrete choices of action that could be matched by a day trader with a ton of spreadsheets, if it weren't for the market's speed
  3. Self-driving cars have several dimensions of action happening at once, but that's still a narrow domain of outputs
  4. IBM's Jeopardy bot is able to avoid some complexity in its intermediary information structures thanks to the very simple form of the output and the constraints humans are under in the game; you or I, with access to Google, more time, and a fair shot at the buzzer, would beat Watson handily.
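To make the chess point concrete, here's a rough back-of-the-envelope sketch. The ~35 average branching factor is a commonly cited estimate, and the search depths are illustrative:

```python
# Rough estimate of how many positions a full-width chess search considers.
# An average chess position has roughly 35 legal moves (a commonly cited figure).
BRANCHING_FACTOR = 35

def positions_searched(depth):
    """Leaf positions in a full-width search to the given depth (in plies)."""
    return BRANCHING_FACTOR ** depth

# A human might calculate a few plies ahead; engines search far deeper.
print(positions_searched(4))   # roughly 1.5 million
print(positions_searched(12))  # roughly 3.4 * 10**18
```

The rules fit in an afternoon; the machine's edge is purely in how much of this tree it can grind through.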
Meanwhile, many popular AI projects whose output is in a much less constrained domain turn out to actually be massively human-guided. Take, for instance, the machine learning programs that write pop songs: since we don't have good generalizable tools for composing the overall structures of pop songs, those structures still need to be provided by human composers.

Only at the bleeding edge is AI able to leapfrog what a smart adult with a little training can do slowly. Language translation qualifies: it's gotten good enough to beat an adult trying to translate into a language they don't know well, who, even armed with several dictionaries and grammar books, gets stymied by idioms and set phrases.

Coming back to HTML interface design, isn't it crazy that there aren't any super popular tools for fully designing arbitrary HTML interfaces, besides just text editors (with live preview)? Wix, Squarespace, Dreamweaver, FrontPage... They either massively limit your control over aspects of layout, or they make you quickly dive into the HTML code. I mean, when Wix added support for columns, they announced it as if it were a major feature!

In other words, HTML layout is fucking complicated, and there are too many subtle aspects for WYSIWYG interfaces to handle fully. Xcode's interface builder is a complete nightmare. So if this AI project really is interpreting aspects of layout from a hand drawing, it's leapfrogging what an adult human with a little training can do without AI.

That's what's ambitious about this.


Monday, April 23, 2018

There are dozens of reasons to fear AI, and only one mistaken reason not to

James Vincent just wrote a review in The Verge of Chris Paine's new documentary Do You Trust This Computer?, which features noted AI alarmist Elon Musk (and is being distributed for free online thanks to a donation by Musk).

The review leaves a lot to be desired. Yes, algorithmic policing deserves criticism and attention... but that's a totally different film. This one is focused on existential risk: there's a very good chance that animal life on Earth will end in the next 100 years because of AI. Vincent's objections run along the lines of (paraphrasing) "But a lot of people are publishing papers about possible ways to rein in AI!" I think that deeply misses the point.

There are thousands of different, independently developed AI programs running right now that play MMORPGs and are programmed to mine resources, forge weapons, seek the inhabitants of their world and kill them. Lots of these were made by teenagers. No one had to program them to be evil--they're not evil!

How far is Boston Dynamics from being able to perform all of those steps in real life? How far is the US military? There will probably be an arms race with killer robots, there will be tons of private sector research, there will be script kiddies, we already have homebrewed armed drones... and there's no real way to reliably keep software from taking actions in the real world that it would take in a virtual world.

The rhetorical task in front of the alarmist camp is merely to point out that there are many reasonable scenarios in which we inadvertently destroy the human race (and perhaps most animal life on earth) in the course of improving AI.

The rhetorical task in front of the optimists is to prove that all such scenarios anyone could ever conceive are unreasonable and unlikely. I've read a lot of optimists, and I haven't encountered one who seems to be thinking about the problem thoroughly and rigorously.

They mistakenly operate from the presumption that only one body of researchers needs to come to agreement about restrictions on AI research. Or they mistakenly assume that evil is necessary for a machine to kill. Or they act as if we are still in the mainframe or personal computer era, when AI agents could be physically unplugged. Or they misunderstand how easy it would be to kill hundreds of millions of people (just get a few dozen sheets of uranium together in various places and clap them together, or set up a network of autonomous factories that disproportionately use up the atmosphere's gases, or make tiny drones that seek and burrow into necks...). Or they miss how aggressively major-power militaries are moving toward software and hardware for self-perpetuating, autonomous killing machines. Or they misunderstand how opaque the inner workings of trained machine learning algorithms are.

When I hear someone say that they are certain there's no risk that this ends with the destruction of humanity (and probably most animal life on Earth), I think they are experiencing a profound failure of imagination.


Thursday, April 19, 2018

Comey is dead wrong: mass incarceration is real

Reviewing James Comey's new memoir in The Intercept, Peter Maass focuses on an episode in which Comey complained to Barack Obama that the president's use of the term "mass incarceration" was offensive:
Comey did not hide these views while at the FBI, and after making a speech in Chicago in 2015 that was not well received by the civil rights community, he was summoned to the Oval Office by then-President Barack Obama. Comey describes that session in his book, and he seemed to double down, telling the country’s first black president that the law enforcement community was upset at the way Obama had used the phrase “mass incarceration.” It was offensive, Comey told the president.

“I thought the term was both inaccurate and insulting to a lot of good people in law enforcement who cared deeply about helping people trapped in dangerous neighborhoods,” Comey writes. “It was inaccurate in the sense that there was nothing ‘mass’ about the incarceration: every defendant was charged individually, represented individually by counsel, convicted by a court individually, sentenced individually, reviewed on appeal individually, and incarcerated. That added up to a lot of people in jail, but there was nothing ‘mass’ about it.”

Maass makes a number of good points in his counterargument, including the point that more than 90% of cases on the state and federal level are settled with a plea bargain and not given their day in court.

A few more points I would add:

  1. There are hundreds of thousands of people sitting in jail, some for months or even years, who are there not because they were convicted but because they have a pending case and can't afford bail.
  2. Police departments routinely sweep up and jail groups of people with little knowledge of whether or not they have individually committed any crime, and release them without charge. (This is most visible at political protests when it happens to middle-class professionals, such as the dentist on his way to work who was arrested in New York City when police just grabbed everyone in the vicinity of a protest against the 2004 Republican convention. But it happens far more often than that.)
  3. Numerous police departments have faced revelations in recent years that their police and prosecutors routinely incarcerated people on charges they knew were false.
  4. Prison management and construction firms have lobbied, both legally and illegally, for increased imprisonment in multiple states. In Pennsylvania, a judge who secretly received a $2 million "finder's fee" from a private prison jailed kids for *years* for offenses as minor as possessing a bike that a kid's parents had bought him on Craigslist and that turned out to have been stolen.
  5. In numerous police departments, whistleblowers have revealed the persistence of quotas for arrests and citations.

In all of these aspects, people who did not commit crimes or who have not been convicted of crimes are incarcerated for large amounts of time due to political and economic dynamics far beyond any formal demonstration of their guilt.

Mass incarceration of the guilty is also an issue; but at the very least, any decent human being with a modicum of honesty must concede that the widespread, systematic incarceration of those whose guilt has not been determined and demonstrated by the system constitutes mass incarceration. In this country, some people get individual consideration and are presumed innocent; others are not, but rather treated a significant amount of the time as an undifferentiated mass of poor people and people of color.

Maass mentions a separate part of the book where Comey is surprisingly frank about this distinction. He quotes Comey as writing, of former general David Petraeus's slap on the wrist for lying to the FBI and mishandling classified information:

I believed, and still believe, that Petraeus was treated under a double standard based on class... A poor person, an unknown person — say a young black Baptist minister from Richmond — would be charged with a felony and sent to jail.

Comey is right about that, but of course there are even more than those two levels of standard. For the many who are arrested and jailed en masse, nothing close to the level of offense or the amount of evidence in Petraeus's case is necessary to put them away for months or years.


Wednesday, April 11, 2018

The real problem with scientific papers

An article by Richard Goerg in the Atlantic arguing that the scientific paper is obsolete has been making the rounds.

There are too many lazy assumptions in this (commonly repeated) line of analysis to count. I do appreciate the skepticism toward Wolfram that the author briefly shows, but I wish he had thought to apply it more broadly. Yes, it's definitely really cool that you can use Mathematica to do all kinds of math. But from my experience playing with Wolfram Alpha, the whole "analyze the rainfall during the Battle of the Somme" stuff is way overblown. When you get away from cherry-picked, Wolfram-provided examples, the data sources are thin and spotty, and, most importantly, they presume a single universal source of truth, as if we haven't learned from the past 25 years that any data or facts you assume to be true without examining them carefully turn out to be at least somewhat false.

Bret Victor is a genius, but like Wolfram he suffers from a smugness so deep that it gets in the way of his brilliance. This research paper rewrite is a perfect example. It's great, of course--but it stands alone, and it came only after a tremendous amount of work by an unusually brilliant programmer.

Compare the technologies the original authors used--a word processor plus some kind of spreadsheet with graphing capabilities--to the technologies Victor used--custom JavaScript, CSS, and HTML. How many researchers have the access and the skills to produce work rapidly with the former versus the latter? How many people in the world could quickly and reliably produce the latter? How likely are the products of each to be obsolete in 5 years, 10 years, 20 years? (When was the last time you ran software from more than 10 years ago on anything but its original dedicated hardware?) How replicable and applicable is Victor's work here to papers in a domain even a few degrees away from this particular type of graph analysis?

Yes, Mathematica points to a direction where non-programmers could produce complex interactive widgets and such, but given that centuries of person-hours by brilliant people have gone into it and it still isn't being used for that, how broadly achievable is that vision, really?

I think there's a much more fruitful direction for explanatory publishing: the clarity, and the focus on what matters, that a good explainer brings when a companion asks them to explain something on a napkin at a bar. (I call this a "Gawker explanation," because Gawker was founded with the intention of telling you the New York media news that an informed insider would share with you off the record at a bar.) There are ideas too complex to explain this way, but most published papers aren't that complex. This can be done visually with text and images--think how easy it would be to convert Victor's fun interactive sliders into static images and text that would be nearly as good (much as he already did in the bulk of that redesign)--and it can be done in the great explanatory medium of our era: video! That Victor, and this author, don't seem to seriously consider video as an explanatory medium tells you, I think, that they still have a lot of work to do in improving the ways we inform others.

The replication crisis is a result of the medium of publishing, yes, but that medium is mostly social, economic, and institutional, not textual. These forces reward surprising new findings and reinforce academic power structures, and they discourage double-checking, declaring null findings, exploring alternative hypotheses, and telling the truth about research labor and credit. Similarly, video explainers such as TED talks tend to be mediocre because their medium, viral social media, rewards fame for speakers who can line up emotional zingers and simplistic narratives that reassure their audience by obscuring complexity and providing easy answers.

People don't tell the truth about what they do and don't understand. They create easy answers and positive narratives. They worship power and obey authority. They go along to get along. They respect confidence more than skepticism and see honest self-doubt as weak. They construct pyramid schemes of authority wherever they go. That's the problem with academic papers, not "print".
