Monday, March 06, 2006

Predicting the Oscars, and other tales of the unpredictable

Instant messaging was made for times like this:
Kate: Dude this is your girlfriend

This morning I found out Crash won best screenplay

and that's not all

BEST FUCKING PICTURE

Why should I be a screenwriter at all? What's the point? the world's gone crazy

I'm going for a run at the hippodrome to keep from imploding

On the way there, I hope a car runs over me

Perhaps this would be a bad time to remind her that I bet on Crash and shorted Brokeback. (I also bet on long shots Capote for picture and David Strathairn for actor.) I expected Brokeback to win, but the odds people were accepting to short it were just too good not to take a chance.

Why did I like Crash's odds and not Brokeback's? In a five-way race, the winner only needs a plurality of the vote, and the vote can get split all kinds of ways. Theoretically, a film or actor can win with just over 20% of the vote; in practice, winners probably get more like 35 or 40%. When one film draws all the voters of certain stripes, as Titanic surely did, the winning percentage is likely over 50%. But when the genre, politics and highbrow-lowbrow level of the five nominees are as similar as they were among Brokeback Mountain, Crash, Good Night and Good Luck, Munich and Capote, the vote is split all over the place and the winning percentage is likely to be lower.
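A quick plurality simulation makes the effect concrete. (The preference weights below are invented for illustration; they aren't real voter data, just a sketch of a split field versus a runaway favorite.)

```python
import random
from collections import Counter

def winning_share(weights, n_voters=6000, trials=200):
    """Average vote share of the plurality winner, given relative
    preference weights for each nominee (hypothetical numbers)."""
    shares = []
    for _ in range(trials):
        # Each voter picks one nominee in proportion to the weights.
        votes = random.choices(range(len(weights)), weights=weights, k=n_voters)
        top_count = Counter(votes).most_common(1)[0][1]
        shares.append(top_count / n_voters)
    return sum(shares) / trials

# Five similar nominees, votes split all over the place:
split = winning_share([1, 1, 1, 1, 1])
# One dominant film drawing most voters, Titanic-style:
runaway = winning_share([6, 1, 1, 1, 1])
print(f"split field, winner's share:   {split:.0%}")   # a little over 20%
print(f"runaway favorite's share:      {runaway:.0%}")  # well over 50%
```

In the evenly split field, the winner squeaks by with barely more than a fifth of the vote, so even a modest edge in appeal translates into a very uncertain win.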

A few weeks ago, Seed Magazine profiled a statistician trying to predict the Oscars. He clearly didn't get the game theory implications of the way Oscar voting works:

But Iain Pardoe, a statistician in the department of decision sciences at the University of Oregon, thinks Brokeback Mountain is pretty much a sure bet for best picture—a 91.1% lock, to be exact.

Evaluating various factors, such as previous Oscars won by directors or actors, how many total nominations a particular movie receives and the results of other award ceremonies like the Golden Globes, Pardoe has created an algorithm to predict who will win in Oscar’s four major categories: best picture, best director, best actor and best actress.

He correctly fingered Philip Seymour Hoffman and Reese Witherspoon, but then again, so did the market; Tradesports.com had odds remarkably similar to the odds he calculated in those categories.

But the market was more careful about Brokeback than he was; it was trading around 75% at the time he made his prediction (though this was still inflated enough to draw plenty of short sellers).

Professor Pardoe's failure to predict the best picture Oscar shows an inherent problem in artificial intelligence: it's hard to know what data is and isn't relevant for building knowledge into a computer program or a set of equations and algorithms. Intelligent human observers could see that the Best Picture category was inherently unpredictable this year--which dampened the odds boost that Brokeback got from winning best picture at earlier awards ceremonies, and increased the odds of the rest of the pack. But this voting structure was a less relevant factor in the past. How was Pardoe to know it needed modeling as well?

This problem recalls the strategy Garry Kasparov used to defeat the supercomputer X3D Fritz at chess in game three of their 2003 four-game match, which ended in a draw, with Kasparov going 1-1-2. (See my talented friend Mickey's illustration at right.) The core strength of a chess program is its ability to look many moves ahead and choose the move that, provided the opponent plays optimally, leads to the best outcome. But if no strong move will be available for many turns, the computer's effective look-ahead is reduced. A human who can see that the strategy hinges on a particular move taking place far in the future doesn't have this problem; he or she can focus on subtle differences in positional value that take advantage of the inevitability of that eventual move.

The board in Game 3 of X3D Fritz vs. Garry Kasparov, at the point when Fritz's operators resigned
Kasparov beat Fritz by dividing the board with a wall of pawns, so that its ability to look ahead was made less relevant than Kasparov's ability to judge the positional value of pieces that might not see action for a long time. This worked so well that Fritz was reduced to the chess equivalent of twiddling its thumbs--at one point, it moved a bishop only to move it right back on the next turn, drawing a loud sigh from its designers.
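A schematic way to see the horizon effect (everything here is invented for illustration: the move names, ply counts and scores are toy values, not real engine output):

```python
def evaluate(move, horizon, plies_to_breakthrough=12):
    """Toy evaluation of a locked-pawn-wall position. Nothing
    decisive happens until `plies_to_breakthrough` plies out, so a
    search whose horizon ends sooner sees only a flat score."""
    if horizon >= plies_to_breakthrough:
        # Deep enough to see the wall open: the patient plan wins out.
        return 100 if move == "prepare_breakthrough" else -100
    return 0  # horizon too short: every move looks identical

moves = ["prepare_breakthrough", "shuffle_bishop"]
engine = {m: evaluate(m, horizon=8) for m in moves}   # Fritz's reach
human  = {m: evaluate(m, horizon=14) for m in moves}  # Kasparov's plan
print(engine)  # both moves score 0: the search can't tell them apart
print(human)   # only the long-range plan distinguishes the moves
```

Within its horizon, the engine has no basis to prefer preparing the breakthrough over shuffling a bishop back and forth, which is exactly the thumb-twiddling behavior Fritz fell into.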

Like Pardoe's Oscars-predicting system, X3D Fritz modeled knowledge about its world in a way that let it perform well, but left weaknesses because it couldn't recognize situations in which a different model of the data would be more appropriate. No smart, human Oscars gambler would take a 91.1% bet on Brokeback in such a splittable field.

Anonymous Greg Wayne on Tue Mar 07, 05:49:00 AM:
Interesting--though there seems to be something of a double standard with respect to "smart, human Oscar gamblers." That is, they're only smart if they predict correctly. I'm sure there were people on tradesports who are smart by many measures, yet who thought there were greater odds than 91.9% in favor of Brokeback. In evaluating human potential, we too often bias ourselves in favor of what the best humans can do; in gambling we assume the Warren Buffets of the world are rich due to their own machinations as opposed to the effects of chance distributed among the hundreds of millions of stock market players. If there were thousands of Fardoes out there, each with their own system, one of them would have gotten it right, no? I think you're probably right about the importance of the structure of the voting system, but perhaps we shouldn't discredit artificial intelligence so much as Fardoe for excluding any voting system classification in his program.
 
Blogger Ben on Tue Mar 07, 09:19:00 AM:
Good points, especially about Warren Buffett. As Michael Lewis pointed out in his excellent profile of 15-year-old stock-inflating phenomenon Jonathan Lebed, you get recognized in business if you take risks and get lucky, not if you balance risk and conservativeness ideally.

After all, even a 91% likely event only occurs about 10 out of every 11 times. Perhaps this Oscars was just the 11th.
 
Blogger Ben on Thu Mar 09, 10:46:00 AM:
Kate adds:

your post which quotes me left out my best line: "i am all alone as I learn about this."