Sunday, December 31, 2006

360 degrees of complacent separation

Scientists in London are revisiting Stanley Milgram's controversial 1961 experiment into obedience and cruelty. Instead of using actors to pretend to be electrocuted in another room, as Milgram did, the new experimenters deliver the fake shocks to a computer-animated person who the human subjects know isn't real.

The idea is to be less abusive to the human subjects than Milgram was. From the article on the new, animated study:
Infamous experiments almost 50 years ago discovered that ordinary people—under orders from an authority figure—would deliver apparently lethal electrical shocks to complete strangers... psychologist Stanley Milgram at Yale University [conducted] his controversial experiments in 1961, months after the trial of Nazi war criminal Adolf Eichmann began.
The Milgram experiment discovered that ordinary people could easily be persuaded to give what they believed to be lethal electrical shocks to randomly chosen strangers, even if it conflicted with their own consciences, if instructed to do so by a perceived authority figure. The stranger was at no time actually hurt.

"The line of research opened up by Milgram was of tremendous importance in the understanding of human behavior," said virtual reality researcher Mel Slater at University College London. However, it triggered a firestorm over the ethics of placing volunteers in deceptive and highly disturbing situations.
The results were alarming--famously so, thanks to Milgram's popular writing and filmmaking about the experiment. Milgram had a knack for designing illuminating social experiments--he famously conducted the "six degrees of separation" study--and for communicating the results to the public.

From Wikipedia:
Before the experiment was conducted Milgram polled 14 Yale senior psychology majors as to what the results would be. All respondents believed that only a sadistic few (average 1.2%) would be prepared to give the maximum voltage. Milgram also informally polled his colleagues, and found that they believed very few subjects would go beyond a very strong shock.

In Milgram's first set of experiments, 65 percent (26 out of 40) of experimental participants administered the experiment's final 450-volt shock, though many were quite uncomfortable in doing so; everyone paused at some point and questioned the experiment, some even saying they would return the check for the money they were paid. No participant steadfastly refused to give further shocks before the 300-volt level.
Dr. Thomas Blass of the University of Maryland Baltimore County performed a meta-analysis on the results of repeated performances of the experiment. He found that the percentage of participants who are prepared to inflict fatal voltages remains remarkably constant, between 61% and 66%, regardless of time or location.

I, for one, have never understood the outcry against Milgram's original study. Sure, the subjects might be disturbed by what they learned about themselves, but they were not forced to electrocute others; in fact they were explicitly informed that they would be paid for participating whether or not they completed the ordered tasks.

It is disturbing that some subjects became so nervous that they suffered what Milgram called "uncontrollable seizures", but all were told in exit interviews that the shocked subjects had been acting and not really shocked. The only damage done by the experiments was to the subjects' false assumptions about themselves, and I do not believe that scientists have an ethical obligation to protect subjects from truth or its emotional results. There are some people who would experience psychological hardship just from hearing an explicit description of the Milgram experiment (or, for that matter, the horrors of slavery or the Holocaust); is it scientifically unethical even to describe the experiment in detail to a public audience?

I might feel differently if the quality being exposed were a hard one to change. A study where subjects are ridiculed for being fat, for example, would seem unethical to me, because the subjects cannot simply decide to change. But a subject startled into realizing her own latent cruelty can resolve right away to act differently in such situations in the future.

The Wikipedia article reports that few subjects complained later of Milgram's coercion (note that this section of the article provides no attribution):
In Milgram's defence, 84 percent of former participants surveyed later said they were "glad" or "very glad" to have participated and 15 percent chose neutral (92% of all former participants responding). Many later wrote expressing thanks. Milgram repeatedly received offers of assistance and requests to join his staff from former participants. Six years later (during the height of the Vietnam War), one of the participants in the experiment sent correspondence to Milgram, explaining why he was "glad" to have been involved despite the apparent levels of stress:

While I was a subject [participant] in 1964, though I believed that I was hurting someone, I was totally unaware of why I was doing so. Few people ever realize when they are acting according to their own beliefs and when they are meekly submitting to authority. ... To permit myself to be drafted with the understanding that I am submitting to authority's demand to do something very wrong would make me frightened of myself. ... I am fully prepared to go to jail if I am not granted Conscientious Objector status. Indeed, it is the only course I could take to be faithful to what I believe. My only hope is that members of my board act equally according to their conscience...

Resorting to computer-animated characters to receive shocks means fewer unpredictable human responses like the one shown above. How typical of our society (and here I lump together the United States and Britain): we solve problems by pretending they don't exist, jump through hoops to prevent experiences we can't control, and respond to high stakes of joy and pain by ordering the difference split.

Friday, December 29, 2006

Royale with cheese

Saw the new Bond movie. It hit the right notes: great sets and scenery, exciting fights, sex and innuendo, a few hilariously cheesy lines and a plot that more or less makes sense. Why can't Hollywood get this formula right more often?

(One refrigerator door question: when Bond enters his password, does he appear to hit keys that do not spell his secret word? This is doubly awkward because of an earlier dramatic moment that hinges on a keypad's requiring a ludicrously obvious text password.)

Was disappointed to see that the film used Texas Hold-'em in place of the book's Baccarat Chemin de Fer (a game I understand even less than Craps), which is Bond's signature game. Ironically, people say that the most detailed Bond film depiction of Baccarat came in the original 1967 Casino Royale film--a spoof of Bond movies starring Peter Sellers, Woody Allen and David Niven as three different Bonds, Orson Welles as the baddie Le Chiffre, as well as Deborah Kerr, Jean-Paul Belmondo, and Ursula Andress.

My biggest complaint about the film is that the filmmakers do not play up what makes Le Chiffre such an intriguing character. Fleming made him a survivor of Dachau--hence his name, which means "The Number"--and a genius at mathematics, chess and other cerebral matters. We do see him calculate a number once, but its context is a cliche and it does little to define his character. Could we see him, for example, mentally triangulate the position of a bug, or of a moving elevator or car? Or even see him take in a huge amount of information at the poker table? We do see him recite a stock market figure that he knows well, but it would be more defining if he casually calculated complex interacting fluctuations.

It's also frustrating to see films so often inflate poker hand odds so that every other deal contains a full house. Perhaps the players, or the house, is cheating? If so, they'd be wise to remember that cheating at Baccarat can ruin lives and break up royal romantic affairs--just ask Edward VII.
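For what it's worth, the inflation is easy to quantify. Here is a quick Monte Carlo sketch (the deck encoding and trial count are mine, and the rare straight flush, which also beats a full house, is ignored for simplicity):

```python
import random
from collections import Counter

def is_full_house(cards):
    # cards: list of (rank, suit) tuples. A full house is a triple plus a
    # pair (or a second triple); four of a kind is excluded because it
    # outranks a full house and would be played instead.
    counts = sorted(Counter(rank for rank, _ in cards).values(), reverse=True)
    return counts[0] == 3 and counts[1] >= 2

# Standard 52-card deck: 13 ranks x 4 suits.
deck = [(rank, suit) for rank in range(13) for suit in range(4)]

random.seed(1)
trials = 200_000
hits = sum(is_full_house(random.sample(deck, 7)) for _ in range(trials))
print(f"full house from 7 cards: about {hits / trials:.1%}")  # roughly 2.6%
```

Even with seven cards to choose from, a full house shows up in only about one deal in forty--nothing like "every other deal."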

Tuesday, December 19, 2006

Women in science: the narrow band of acceptable behavior

The NY Times today has a piece about several recent conferences on the inclusion and exclusion of women in top-tier science departments. At Columbia, Harvard, Rice and CUNY, speakers offered explanations for why relatively few women who study science in college land professorships, particularly at high-prestige schools:

One issue is negotiating skills, said Daniel R. Ames, a psychologist who teaches at Columbia University’s business school and who spoke last month at a university-sponsored symposium, “The Science of Diversity.” Dr. Ames said that when he asks people what worries them about navigating the workplace, men and women give the same answer: How hard should I push? How aggressive should I be? Too little seems ineffective, but too much comes across as brash or unpleasant.

Answering the aggressiveness question correctly can be a key to obtaining the financial resources (like laboratory space or stipends for graduate students) and the social capital (like collaboration and sharing) that are essential for success in science, he said. But, he told his mostly female audience, “the band of acceptable behavior for women is narrower than it is for men.”


Even today, [said Madeline Heilman, a psychologist at New York University], the idea that women are somehow unsuited to science is widespread and tenacious. Because people judge others in terms of these unconscious prejudices, she said, the same behavior that would suggest a man is collaborative, judicious or flexible would mark a woman as needy, timid or flighty.

The article mentions research finding that employers prefer resumes with male names over female names. Google came up with nothing on that research, but it did point me to reports of a fascinating study by professors Marianne Bertrand and Sendhil Mullainathan, which found that resumes with identical content generated 50% more responses if they carried a typically white name than a typically black one. The study was called "Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination" [link to abstract], which suggests that there may be illuminating findings with regard to gender as well as race, though the authors never summarize the results by gender. They seem to have collaborated on many fascinating papers with appeal in the Freakonomics vein (see four interesting abstracts), most of which are hidden behind the great ivory firewall, but this one is freely available [pdf].

Though this study doesn't directly apply to the question of gender discrimination in academic appointments and tenure, a similar form of implicit bias may be at work here. For a white man, it's sometimes hard to believe that racism, sexism and other forms of oppression persist in causing widespread damage in America; and if I believe it in word, that doesn't mean I know it to be true intimately. But evidence like this is arresting (and I encourage you to try Harvard's implicit association test yourself). Clearly, vague and seemingly inconsequential learned associations in all of us hugely limit the ability of most Americans to chart the course of their own lives.

This study only looks at the bias involved in one decision: whether to call back a prospective job applicant. But that means more than just that black applicants need to send out a few more resumes than whites do. Bias may be involved in countless small decisions over the course of hiring and working: having responded, whether to schedule an interview; having interviewed, whether to interpret energy as eagerness or aggression, and whether to hire; having hired, whether to assume little contact is a sign of dedication or of low work output; having been convinced of dedication, whether to promote.
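The compounding arithmetic here is stark. A sketch with hypothetical numbers (the stages and the per-stage rates are my inventions, not figures from the study):

```python
# Suppose two otherwise identical candidates face the same sequence of
# decisions, but bias shaves the disfavored candidate's odds at each one.
stages = ["callback", "interview", "hire", "strong review", "promotion"]
p_favored, p_disfavored = 0.90, 0.80  # hypothetical per-stage pass rates

# A modest gap at each decision compounds multiplicatively over a career.
survive_favored = p_favored ** len(stages)        # 0.9^5 ~= 0.59
survive_disfavored = p_disfavored ** len(stages)  # 0.8^5 ~= 0.33

print(f"{survive_favored:.2f} vs {survive_disfavored:.2f}")
```

A ten-point gap at each of five decisions nearly doubles one candidate's chance of clearing them all--which is the sense in which a callback study measuring only the first decision understates the cumulative damage.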

Daniel Ames, the Columbia prof quoted above, discusses one way this kind of compounded bias dogs women:

Women who assert themselves “may be derogated,” he said, and, possibly as a result, women are less likely to recognize negotiating opportunities, and may be apprehensive about negotiating for resources when opportunities arise. That is a problem, he said, because even small differences in resources can “accumulate over a career to lead to significant differences in outcomes.”

The framework of this explanation could be applied to the question of why women turn away from science and math in elementary school, high school, and college. Compounded expectations, assumptions and differences in treatment may mean that small differences in cultural treatment day-to-day cause large differences over the course of an education. Genetic differences in predisposition for interest in math and science could still play a role, of course, and if they do exist, could help fuel a vicious circle that props up stereotypes against contrary evidence and stacks the deck against women every time they make a decision--conscious or subconscious--regarding interests, dedication and career path.

A last note -- one speaker saw a silver lining to the Larry Summers gender-fender-bender:

At the end of her talk, [Yale molecular biophysics professor Joan Steitz] displayed a chart showing rises in the proportion of women in the Massachusetts Institute of Technology faculty. There were few until the passage of civil rights legislation 40 years ago, when the numbers jumped a bit and then leveled off, she said. The numbers jumped again in the late 1990s after a report criticized the institute’s hiring and promotion practices as they related to women.

“We now have another plateau,” Dr. Steitz said, “and it’s my fervent hope that Larry Summers, God bless him, and the report that’s just come out will have this kind of impact.”

Monday, December 18, 2006

So an alien with a million bucks walks into a bar...

Boingboing recently linked to a description of a paradox called "Newcomb's Paradox", with the description:
A highly superior being from another part of the galaxy presents you with two boxes, one open and one closed. In the open box there is a thousand-dollar bill. In the closed box there is either one million dollars or there is nothing. You are to choose between taking both boxes or taking the closed box only.

But there's a catch. The being claims that he is able to predict what any human being will decide to do. If he predicted you would take only the closed box, then he placed a million dollars in it. But if he predicted you would take both boxes, he left the closed box empty.

Furthermore, he has run this experiment with 999 people before, and has been right every time. What do you do?

On the one hand, the evidence is fairly obvious that if you choose to take only the closed box you will get one million dollars, whereas if you take both boxes you get only a measly thousand. You'd be stupid to take both boxes.

On the other hand, at the time you make your decision, the closed box already is empty or else contains a million dollars. Either way, if you take both boxes you get a thousand dollars more than if you take the closed box only.
The puzzle was apparently popularized by Scientific American columnist Martin Gardner and Harvard philosopher Robert Nozick, both of whom declared that they would take both boxes; Nozick figured that since your choice can't affect the prediction the alien made in the past, your hope that the alien predicted generously and your desire to grab the extra $1k are unrelated. Franz Kiekeben, who wrote the above summary, disagrees, arguing that if you behave like Nozick, then surely the alien predicted this and didn't put the $1m in the box; therefore your only chance at the $1m is to have already been the kind of person who would take only the closed box.

But there's a difference between the problem as Kiekeben originally poses it and the problem as he solves it. The original phrasing suggests that you first begin to think about the puzzle when the alien appears to you--that is, after it has made its prediction already. At that point, you are powerless to affect the alien's decision. As Nozick points out, you delude yourself if you think that your choice to pick just the $1m box will encourage the alien, in the past, to predict this. Kiekeben concludes that "Nozick and Gardner's choice to take both boxes... make them much less likely to make a million." Assuming that the alien's predictions are very likely to be correct, Kiekeben is correct that Nozick and Gardner's choice now to take both boxes would hurt them were such an alien ever to appear to them. But does that mean they should, like Kiekeben, believe that it is wiser to take just the one box? That is, should they take on a belief solely because it is advantageous (and let's hope that no aliens ever appear who reward takers of both boxes instead!), rather than because it is true?

A thread on the puzzle has attracted some excellent comments, including this from Brad Templeton, chairman of the board of the Electronic Frontier Foundation:
In the context of this problem, you do not _make_ a choice. You _are_ a choice. The alien's accuracy suggests that a human's conclusion that she is weighing two options here is a false one. From the alien's viewpoint, you are no more likely to choose differently than the prediction than you will choose to suddenly freeze because of a bizarre entropic coincidence. (It could happen, but it's really unlikely.)

So there is no paradox. Predictable beings are presented with a problem and answer it just as predicted. The only paradox is in their illusion that they might do otherwise. The trick is the question asks you to decide which choice to take, when in fact the premise says you don't have that freedom.
Taking both boxes does mean hoping that the alien predicted incorrectly, which seems like a long shot. But at this point, the degree to which you are the kind of person who takes one box or the kind that takes two is set. It may be likely that the alien predicted correctly, but it isn't absolute; as one commenter in the forum points out, you could render the alien's prediction useless by just flipping a coin. You are no more a slave to the alien's past odds than you are a miracle worker by plucking a blade of grass (because hey, what are the odds that you'd have picked that particular blade!?). You should take a deep breath, hope that you seem like the kind of person who would take just the $1m box, and take both boxes.
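The expected-value arithmetic both camps appeal to is worth spelling out. A minimal sketch, treating the alien's accuracy as a probability p (the specific values of p are mine, chosen to illustrate):

```python
# Payoffs in Newcomb's problem as a function of the alien's assumed
# accuracy p. A one-boxer gets $1m only when predicted correctly; a
# two-boxer always gets the $1k, plus $1m when the alien guessed wrong.
M, K = 1_000_000, 1_000

def ev_one_box(p):
    return p * M

def ev_two_boxes(p):
    return (1 - p) * M + K

for p in (0.5, 0.5005, 0.6, 0.999):
    print(p, ev_one_box(p), ev_two_boxes(p))
# Setting the two expressions equal gives p = (M + K) / 2M = 0.5005:
# any predictor even slightly better than a coin flip makes one-boxing
# the higher-expectation choice.
```

The coin-flip strategy amounts to forcing p down to 0.5, exactly the region where two-boxing wins by its guaranteed $1k.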

(Better yet, decide to flip a coin and to interpret heads as instructing you to take both boxes, and be lucky enough to have a friend who will slip a heads-only coin into your pocket without your knowledge. The alien's prediction will still be useless, but you'll be guaranteed to make the most profitable decision!)

Sunday, December 17, 2006

The abyss gazes also into you, Jeffy

The brilliant Backwards City Review blog turned me on to the "Nietzsche Family Circus", a web page that pairs a random Nietzsche quote with a random Family Circus panel. The results are ironic, and sometimes--as with my favorites here, here and here--they are so hilarious it's hard to believe no human actually created that specific pairing.

A few questions. If no human wit went into the creation of a joke, how can it be witty? A human did think to pair the two elements in the first place; does that human deserve credit for creating the joke? Or is this an unintended collaboration between Bill Keane and Nietzsche?

The mash-up concept in various media has been a fertile source of innovation, so it's frustrating that it's so hard to sort out the questions of authorship mash-ups raise. Should sampling of text and images be treated by law the way music is, and thus be subject to the veto power of authors and artists? Should it be allowed freely under a wide interpretation of fair use? And is there any standard, short of the ad-hoc system of wildly diverging judges' opinions, that can be applied to determine the difference between whole-cloth rip-off and the creation of something legitimately new?

Saturday, December 16, 2006

Biff's advice: make like a tree and get out of here!

A thought about why living and working in Georgia, the former Soviet country, was so satisfying:

As an English teacher, I was the voice of all English literature, the owner of all English idioms. Knowledge that had been valueless to date--the meaning of the cliche "a stitch in time saves nine" and its possible origins, for example--was suddenly priceless for its glimpse of the English of a native speaker, and its window into American culture, so prized there. And in my job in the presidential administration, I was not limited to my official role as a peon, but was in effect a consultant for a huge swath of political and cultural orientation: how the stock market works, what the leading trends in deregulation are, how to write a budget. I was a fount of wisdom on management merely because I'd watched enough TV and movies to absorb the conventional wisdom of the West on the subject; I was the nation's vanguard of women's liberation and gay liberation because I could describe these in cogent terms that deflated the bugaboo variants that had been handed down from Soviet times as reasons why the West was mad.

This was addictive to someone who dreamt of being influential, but feared the weight of responsibility. Here was the promised land: a place where I was important, but did not need to lead; where my knowledge was prized, but I did not need to study. It recalled for me the second Back to the Future movie, where even an idiot, armed with the common knowledge of the present and a way to travel into the past, is able to become a king. No surprise, then, that Georgia attracted a rogues' gallery of rejects from Europe and the US. Some came to start over, like a friend who has a penchant for addiction to any substance you've ever heard about; last I heard, he had gotten clean, married a Georgian, and started a successful newspaper. The rogues are not usually so impressive, however: take the erstwhile Economist writer I hired for a government writing project, who was halfway done by deadline, plagiarized large sections, attributed false quotes to ministers, and threatened me when I decided not to pay him in full.

But there was also something innocently wonderful about the role I found myself in. Here I was an emissary for America, and to my surprise I found myself knowledgeable about my homeland and proud of its virtues. But even more I felt affection for the quirks and customs of my land and language. Stories of history and word etymologies came bounding out of my mouth, and with each explanation I was able to know the story intimately as never before. Daniel Dennett theorizes that human consciousness grew out of our ability to talk to ourselves, which grew out of our ability to talk to others; here was a pattern like ontogeny recapitulating phylogeny, where by communicating to others facts I had collected I was able to taste them for the first time. What charm lay in "I've created a monster!", and what tragedy lay in the great but brief successes of racial integration into politics during Reconstruction!

The best part is, Georgia isn't necessary to bring me to this appreciation for what I know and can teach. I have taught all sorts of things in the US, and I suppose the big draw to me is the ability to hit both the notes I credited to Georgia above: the chance to be the master of knowledge, and the chance to know it as an evangelist. (I really cannot enjoy Shakespeare except as a teacher, and am not surprised that my former students were as unconvinced by King Lear as I was in high school.) But these joys have dulled in me over the years, and Georgia refreshed them as if it were a trip to the past and I was the only repository of the built-up insights of future decades.

Tuesday, December 12, 2006

Malcolm Gladwell's foresight bias

Malcolm Gladwell's essay "The Formula" (not yet online) appears in the New Yorker's media issue of Oct 16 of this year. This time, he's reporting on a group of computer-savvy entrepreneurs who have developed a grand artificial intelligence system that, he reports, can predict the earnings potential of a given film script or song.

It's an ambitious essay, the kind that sounds like it might be the basis for Tipping Point 3: The Blink-Master, and I'm glad that Gladwell reaches for big concepts; this one is entertaining and thought-provoking as always. But I wish MG were not so eager a convert to novel ideas.

As is his way, when he points out the surprising successes of this month's big idea, he writes as an evangelist rather than a journalist. The prediction system, we learn, foresaw the success of Norah Jones and the Gnarls Barkley song "Crazy", as well as the middling box office returns of the Nicole Kidman-Sean Penn snoozefest The Interpreter, with astonishing accuracy:
According to the formula, the final shooting script was a $69-million picture (an estimate that came within $4 million of the actual box-office).
This is exciting, but how did the system do with other movies? You know, the ones that these geniuses--who seem to be Gladwell's only source on the quality of the pricey service they sell--didn't jump to tell Gladwell all about? According to Gladwell's vague timetable, the prediction of $69 million was made after the film's box office total was made public; he doesn't hide that fact, but in the face of his enthusiasm it's hard to keep it in mind.

In fact there's not much besides enthusiasm and unsourced anecdotes in the article to validate Gladwell's scoop. It is believable that vague, subjective artistic qualities can be accurately measured by a formula; there have been successes in this field for decades. But Gladwell gives neither context to place the difficulty of the predictive problem he describes, nor evidence that his subjects have accomplished anything at all.

It's not just that the examples are cherry-picked--by the subjects, or by Gladwell, or by both in succession--but that Gladwell gives the impression that the formula is basically mathematical. In fact it largely relies on subjective human judgment. Humans judge which results are to be considered normal for the system, and which are to be considered buggy; humans judge which of the various trials and results to report to Gladwell and to their clients; and, astonishingly, humans are even inputting their subjective quantifications of things like character development.

Here's how Gladwell explains the generation of the prediction system:
The two men... had broken down the elements of screenplay narrative into multiple categories, and then drawn on their encyclopedic knowledge of television and film to assign scripts a score in each of those categories--creating a giant screenplay report card... They could treat screenplays as mathematical propositions, using Mr. Pink and Mr. Brown's categories and scores... [emph. added]
And what are these categories and scores? First, Gladwell quotes one member of the group: "You know, the star wears a blue shirt. The star doesn't zip up his pants. Whatever." A system that generates an assessment of success from such atoms must be innovative indeed. But read on:
He started with the first film and had the neural network make a guess: maybe it said that the hero's moral crisis in act one, which rated a 7 on the 10-point moral-crisis scale, was worth $7 million, and having a gorgeous red-headed eighteen-year-old female lead whose characterization came in at 6.5 was worth $3 million and a 9-point bonding moment between the male lead and a four-year-old boy in act three was worth $2 million, and so on...
Gladwell uses passive language ("rated", "came in at") here, possibly because that allows him to avoid overtly mentioning that the 7/10, 6.5/10 and 9/10 ratings are entirely human-generated.

This is not the promised quantitative code for a hit. When the group gives analysis to studios, it turns out, they suggest things like "better characterization" and ask for "the city where the film was set to be much more of a presence". This advice may be useful, but no more so than that of a producer with a good eye for public taste. The predictions are certainly no stronger than the judgment behind the input values (6.5-point characterization and the like), just the type of datum that can be influenced subconsciously by other "hit" qualities. If, for example, there is a hilarious, winning scene that showcases the lead's star power, but does not develop the character deeply, can we trust that the human judges won't bump up the characterization score?

As for the accuracy of the system, any comp sci grad can code a neural network that will make amazing predictions from unrelated information. The trick is to pre-test various inputs to verify they will produce accurate results; when a group of inputs doesn't work well, you make up a reason why that data is biased (for example, it doesn't work with Indiana Jones-level hits because no one can predict the snowball effect of a blockbuster, but the same problem does not apply to Norah Jones, because in that case the system does predict correctly). Even better, show off the system using the same data the system was trained on. Thus trained, the innocent, mathematical formula can be applied to Butch Cassidy and Ishtar to dramatic effect, and if there is any correlation between script quality (as read by a human for hit potential) and earnings, the subjective input values can ensure the prediction is not wildly off the mark.
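The show-off-on-training-data trick is easy to demonstrate. A toy sketch (the film "scores" and earnings here are invented noise, deliberately unrelated to each other): a model that simply memorizes its training films looks flawless when "validated" on those same films, and useless on anything new.

```python
import random

random.seed(0)

def make_films(n):
    # Each film: five category scores (0-10) and a box-office figure in
    # $millions. The scores are pure noise with no relation to earnings.
    return [([random.uniform(0, 10) for _ in range(5)],
             random.uniform(0, 200)) for _ in range(n)]

def predict(model, scores):
    # 1-nearest-neighbor: recall the earnings of the most similar known film.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda film: dist(film[0], scores))[1]

def mean_abs_error(model, films):
    return sum(abs(predict(model, s) - e) for s, e in films) / len(films)

train = make_films(200)
test = make_films(200)

print(mean_abs_error(train, train))  # 0.0 -- looks like a perfect formula
print(mean_abs_error(train, test))   # tens of millions off -- no power at all
```

Each training film's nearest neighbor is itself, so the "predictions" are exact; on fresh films the model is no better than guessing. Any demonstration that doesn't hold out unseen data proves nothing.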

These tricks seem hard to pass off on observers, but Gladwell is ever forgiving. It's hard to imagine a reporter who covers social science, and who is preparing an 8,000-word piece on amazing predictive abilities, not asking for a prediction of an event that will happen after the piece is published (say, how much Blood Diamond or The Pursuit of Happyness will make). But either Gladwell asked and was refused--and didn't mention this fishiness in his story--or, more worryingly, he just never thought to ask.

Gladwell recently excused intelligence agencies for failing to predict the Sept. 11, 2001 attacks:
I do think that recognition of hindsight bias can change the way we respond to failure. It ought to make us much more accepting of the mistakes of individuals and institutions. Unfortunately, it isn't very satisfying to acknowledge the role of hindsight bias, because there is something very psychologically and politically pleasing about identifying culprits and drafting plans for reform. We need to feel that we are making progress, even when the actual prospects for progress are quite small. [emph. added]
He would do well to take his own advice. He suffers not from hindsight bias, which is thinking that it was easy yesterday to see what was coming today, but from "foresight bias"--thinking that it's hard today to see what came yesterday.

(One last transgression: in the article he gives away the ending of a great film, Dear Frankie. Skip those paragraphs and rent the movie.)

Monday, December 11, 2006

The alliterative laugh of larceny

You have to make it through to page 2 of Jack Shafer's fulminations against Ian McEwan on Slate today in order to get to the best part, the line about the "laugh of larceny." LOL! Please, please lift this line and start using it in everyday life.

Thursday, December 07, 2006

Banish drunk Mickey, and banish all the world

A great early Mickey Mouse cartoon called Gallopin' Gaucho is up on YouTube, and it accompanies well Anthony Lane's essay on Walt Disney in the current New Yorker.

Here's Lane writing of what he imagines Sergei Eisenstein liked about early Walt Disney cartoons:
There was insolence and devilry in the artwork, and a definite dash of arousal: selected portions of Mickey would stretch and squeeze, as if his entire shape were tumescent. Take “The Barn Dance,” a seven-minute hoedown of music, mutilation, and rivalry made by Walt Disney in 1928, in which Mickey Mouse takes Minnie to a dance. He keeps treading on her feet, and the more he treads the more his own feet fatten and swell, till they reach the size of anvils. By now he is stamping on her legs, one of which grows so long and thin, like a strand of black spaghetti, that she stops dancing, ties a loop in it, reaches into her bloomers, pulls out a pair of scissors, and cuts off the excess. She also takes revenge, without hesitation, by turning to a second suitor—who is huge and overbearing, with a predatory leer. The little guy, however, isn’t beaten yet. He finds a balloon, shoves it down the seat of his pants, floats over the intruder, lands in front of his girl, and starts to hoof once more. No cartoon balloon, however, has ever gone unpopped, and “The Barn Dance” closes with Mickey, deflated and re-cuckolded, gazing into the camera and weeping inky tears.
To pluck [Mickey Mouse] from that kinetic environment and stuff him into a synthetic suit, with a fixed grin and a padded ass, may be to grant him another dimension, but it is also, and more disastrously, to slow him down. Mickey ceases to be the fount of chaos; he is now a lumbering doll, made soft and safe... Hence the recent scandal, which spread across the Internet, in which employees dressed as Mickey, Minnie, Chip, Dale, and other favorites were filmed simulating sex at Disneyland Paris.
Mickey's popularity is strange. Few people in my generation have actually seen Mickey Mouse cartoons, but he's still immensely popular and trusted. So it's wild to see the unfamiliar Mickey in Gallopin' Gaucho who smokes, drinks, does a mean tango, and frowns when his sword gets droopy. Lane is rightly horrified by America's happy pasteurizing of Mickey into pablum: "It is that smoothing of rough edges which distresses the cineast, appalls the political cynic, and tempts generations of iconoclasts."

Old Mickey, we know thou dost!

Wednesday, December 06, 2006

Wadie Said, proxy warrior

The candidacy of Wadie Said, son of the late Edward Said, for a professorship is being challenged by a pro-Israel group. (There is an online petition of support for Said.) The post is at Wayne State University Law School, whose administration is figuring out very quickly that to accept Said is to enter the Israel-Palestine war by proxy.

In college I wrote about his father's similar controversies (his portrayal of his childhood in Palestine, and his stone-throwing at an Israel-Lebanon boundary--Alice edited the latter); Edward Said was a favorite lightning rod for this nonsensical shadow war, which is waged in American universities with a level of rhetoric that discredits both sides.

What is most ridiculous about the effort to prevent Said's hiring is that his ostensible sin is not even one that is generally offensive--as were Edward Said's supposed sins of lying and symbolic violence. The main group opposing him hardly accuses him of anything besides having Edward Said for a father, and being guilty of the same sin--having unpopular political opinions.

For instance, Said believes, as I do and as do many Israelis and almost the entire world, that Palestinian refugees are entitled to some form of right of return. From the press release of Stand With Us, the most prominent organizer of the effort against Said:
Supportive of his father’s legacy of “post-colonial,” “Orientalist” slander against Israel, Said advocates extremist Palestinian positions that threaten Israel’s existence. He ardently calls for the Palestinians’ “right of return.” The purpose of the “right of return” is to destroy Israel demographically.
Quite a few Israelis believe that, with so few original inhabitants alive and so many documents destroyed already, a process of return could be handled slowly, fairly, and without creating an Arab majority, but never mind.

The press release continues:
More alarming is Said’s equivocation about “armed resistance,” which many refer to as terrorism. Pressed to clarify his views during the student interview, Said claimed that “certain types of activities, certain types of actions—armed actions—are not murder and they are not terrorism.”
Well, yes--the concept he is describing is "war". And so on.

The final sentence of the press release reads "If we minimize or ignore this issue, we will be aiding the propaganda war against Israel." This is a strange thing to consider when weighing academic merit. But in the Israel-Palestine intellectual proxy war, any accusation will do.

In An Enquiry Concerning the Principles of Morals, David Hume wrote (insightfully and condescendingly):
Disputes with men, pertinaciously obstinate in their principles, are, of all others, the most irksome; except, perhaps, those with persons, entirely disingenuous, who really do not believe the opinions they defend... The same blind adherence to their own arguments is to be expected in both; the same contempt of their antagonists; and the same passionate vehemence, in inforcing sophistry and falsehood. And as reasoning is not the source, whence either disputant derives his tenets; it is in vain to expect, that any logic, which speaks not to the affections, will ever engage him to embrace sounder principles.
I don't doubt that nationalists like those at Stand With Us are sincere in their beliefs, but some views--for example, that Israel refused to allow Palestinians to return to their homes out of self-defense rather than uncaring, or equally, that suicide bombing would vanish overnight if the West Bank settlements were evacuated and the right of return respected--simply boggle the mind. In cases like these, Hume is right that “reasoning is not the source” of the arguments; no amount of facts could sway the disputants. This is not, after all, a discussion, but a war.

Sunday, December 03, 2006

When metaphor doesn't do the work of explanation

Check out William Saletan's skeptical take on "contagious shooting" as an explanation for recent police shootings:

It's natural to grope for a rational or mechanical explanation in cases like these. But it's not clear which kind of explanation this contagion is. If it's rational, it should be judged like any rational process, and cops should be culpable for it. If it's mechanical, it should be controlled like any mechanical process, starting with the guns supplied to police. We can't keep doing what we've been doing: giving cops high-round semiautomatic weapons because we trust them not to blast away like robots, then excusing them like robots when they blast away.

Supposedly, contagious shooting was coined four decades ago to explain copycat police fire during riots. Once you start describing a behavioral phenomenon as a predictable sequence of events—"post-traumatic stress disorder," for example—people start reading it as an excuse. Seven years ago, during the Diallo case, a lawyer for one of the accused officers pointed out that "contagious shooting" was in the New York Police Department patrol guide. "I suspect that this phenomenon may play an active role in this case for my client," he told reporters.

What makes contagious shooting a handy legal defense is its mechanical portrayal of behavior. You're not choosing to kill; you're catching a disease. In the Diallo era, the NYPD patrol guide explained that the first shot "sets off a chain reaction of shooting by other personnel." Officers "join in as a kind of contagion," said the Times. They "instinctively follow suit," said the Daily News, as one shot "sparks a volley from other officers." On Monday, the Times said contagious shooting "spreads like germs, like laughter." One former NYPD official called it the "fog of the moment." Another said "your reflexes take over." A third told CNN, "It's sort of like a Pavlovian response. It's automatic. It's not intentional."

This mess of metaphors is telling. Nothing can behave like germs, sparks, laughter, fog, instinct, and conditioning all at once. That's the first clue that "contagious" is being used not to clarify matters, but to confuse them. Another clue is that the same people who invoke it often point out that the number of shootings by police is low and has been falling. An urge that's so commonly resisted can't be irresistible.
Blogger Ben on Mon Dec 04, 03:48:00 PM:
But it's not significant to his argument that police shootings are falling in total numbers. The number of shots taken after one officer opens fire, per incident with multiple police present--that's what we're talking about.

Saletan is right that we can't provide automatic weapons to police and expect they won't be misused. Since every delay or safety function we add to the guns will increase the number of police killed in action, it's a question of balancing police lives against civilian lives and criminals' lives; but maybe the right combination of training, institutional culture and technology can tweak this tradeoff so it's not so costly.

The Columbia culture wars: a dead horse gets deader

I was interviewed recently for a TV news story on the "Columbia Unbecoming" controversy at Columbia University. The result is a mixed bag, especially regarding choice of interview clips: no talking head makes a convincing case, and much attention is given to irrelevant details.

The video, from the Australian program "Dateline" (no relationship to the American "Dateline"), can be viewed here until the end of 2006; click the video box on the right, and then scroll down to "8 November: Academic Freedom Battle in America". The transcript is also available.

Here's the context:

I went to college at Columbia University, and took three courses with professors in the Middle East and Asian Languages and Cultures (MEALAC) department. One course, "Israeli and Palestinian Cultures and Societies", was taught in alternating years by a Jewish and a Muslim professor, and I took it with the Muslim professor, Joseph Massad. There were lots of debates in class between students and Massad, and students and each other, but the tone was always civil. Massad was very opinionated, and made a point to respond at length to every question posed by students, but delighted in doing so calmly.

When, in the last weeks of the course, I heard that another student had found the class so offensive that she asked a dean to attend a session (the dean declined), I was completely surprised. I knew that the many pro-Israel students in the class disagreed with Massad's views, and I thought the class would have been more balanced if we had read more pro-Zionist writing than just two books by early proponent Theodor Herzl, but I assumed from the frequent in-class debates that everyone felt the class was conducted fairly and pro-Israel points of view were being heard.

I was very wrong. In 2003, several students and faculty members formally complained that MEALAC had an anti-Israel bias. (One example of evidence cited was a petition for divestment from Israel signed by faculty members; in MEALAC, 11 out of 23 professors signed it.) The University formed an investigative committee, which concluded that there was no evidence of misconduct.

Then in 2004, an organization called The David Project Center for Jewish Leadership produced a short documentary film, "Columbia Unbecoming", that accused the Columbia MEALAC department of systemic bias against Israel and pro-Israel Jewish students. (The transcript, but not the film itself, is available online; I have asked to see the film but received no response.)

Students testify in the film that three professors--Massad, George Saliba and Hamid Dabashi, with whom I also took a course--suppressed pro-Israel views in and out of class. Most of the allegations boil down to nothing more than intense conversations--no one alleges any discrimination in grades, advising and mentoring, or seminar admittance--but a few accuse the professors of kicking students out of class or out of their office during arguments.

The University formed a second investigative committee, due largely to pressure from a huge effort in the conservative media to condemn the university. Among others, Nat Hentoff, a defender of the freedom of people to say unpopular things, chose to agree with the David Project's view that it was students' freedoms of speech and thought, and not professors', that were at risk.

I testified before the second investigative committee, in general defense of the professors. But I don't view this in the stark terms of McCarthyism that some of their other defenders do. First, the committee members were patient, attentive, and curious, and seemed not to have pre-formed opinions; their published conclusion disagreed with my testimony, but did note my dissent.

Second, the testimony of students in the "Columbia Unbecoming" film may be exaggerated--I doubt that George Saliba's supposed vaguely menacing gesture really happened--but it seems sincere. As criticism of these professors' personal styles and polemical excesses, the film makes some valid points, and professors are not so sacred that they deserve to be spared public description of their less stellar moments. It's just that it never should have been taken seriously as a critique of academic standards, by the media or by the University.

The frightening thing about this controversy is that it is only the latest in a string of baseless ados that target Columbia professors with unpopular views. (Nicholas De Genova's legitimately awful call for "a million Mogadishus" is an exception.) The University professes to be immune to the opinions of the public and its alumni, but of course it isn't, and its role as a magnet for the Edward Said-Gayatri Spivak crowd is declining.

With such high stakes, it's awkward that the professors themselves, and other students who defend them, do such a poor job of making their case. Thanks to the recent Minutemen controversy at Columbia--another case where run-of-the-mill campus disagreement was handled ineptly by the University, which shut the event down instead of ejecting the protesters--Columbia student Monique Dols is becoming a television face of the campus left. It would be hard to find a less helpful ally. In the Australian news bit, she fights fire with fire, repeating what has become an unfortunate refrain for defenders of the professors: that there were disruptive interlopers in Massad's class, and therefore Massad, not students, was the one cowed. This is ridiculous; I took the class with Dols, and don't recall anyone who was there purely to be confrontational, and anyway Massad takes pleasure in ruffling feathers and is more than capable of handling anyone's confrontational questions.

As for the professors themselves, the controversy has succeeded in bringing out the worst in them and confirming the prejudices of their opponents. They would do much better to declare the accusations hurtful, baseless and distracting, refer doubters to their online syllabuses, invite all students and interested parties to their office hours, and say they need to get back to work. Then again, they're the ones receiving death threats, not me.

Here is an excerpt of the transcript from the Australian program; my apologies that more informative, and flattering, clips from my interview were not chosen. And for the record, I suggested to a Zionist former classmate that he give an interview too, but he declined.

News of the film sent New York's tabloids into a frenzy. They described Columbia variously as: “Poison Ivy”, “Hate-U on the Hudson”, and teaching “Hate 101”. Prominent amongst those accused in the film of intimidating students, Joseph Massad.

ASSOC. PROFESSOR JOSEPH MASSAD: Remember, as soon as this defamatory film had been released, a member of Congress in New York immediately called on Columbia University to fire me, the editorial board of two newspapers in New York also called on Columbia to fire me, there was a special meeting of the New York City Council about the situation of alleged intimidation that pro-Israel Jewish students had been subjected to at Columbia.

Some of Professor Massad's students, like Monique Dols, leapt to his defence.

MONIQUE DOLS, COLUMBIA STUDENT: We often had people who were outside of the class come into the class and disrupt the class and so on.

How so?

MONIQUE DOLS: Well, they would sit in the back of the room. They said they were auditors but they were clearly there to comment and disrupt Professor Massad's class. So if anyone was being intimidated, it was the professors like Professor Massad.

In Brooklyn, I track down another of Joseph Massad's former students, Ben Wheeler.

BEN WHEELER, FORMER COLUMBIA STUDENT: I'm Jewish. And when I was in Hebrew school, growing up, the very simple description of Israel and Palestine that I heard was that there was an uninhabited desert wasteland, and after the Holocaust the Jews needed a homeland and this land was available and so they made the desert bloom. And there was sort of no mention of the Palestinians. And if there ever was, it was essentially that, yeah, there are some people who aren't happy about it because they don't like Jews.

Ben says Joseph Massad did present a pro-Palestinian position but he did not browbeat students.

BEN WHEELER: There was certainly a mixed reaction among the Jewish American students. I think that a lot of students were offended by the particular facts that Massad was choosing to focus on and talk about, many of them, partially because they hadn't known those facts. And I think a lot of students were surprised or shocked to hear the criticism of Israel that was more intense and more cogent than perhaps they were expecting.

But 'Columbia Unbecoming' and the media campaign orchestrated by the David Project, had its impact. University President Lee Bollinger, who declined to be interviewed for this story, initiated an investigation.

ASSOC. PROFESSOR JOSEPH MASSAD: I think this was a terrible precedent. The only precedent I can think of that is similar would be during McCarthyism when university would crack down and set up committees to investigate professors for their political views. I was very saddened to see that the administration had in fact cooperated with these outside forces against its own professors and its own faculty.

The investigation cleared the professors, with one minor exception for Joseph Massad. It was found he had threatened to banish a student from his class for defending Israeli military tactics. It's an allegation he continues to deny - 20 students present in the class have signed a letter saying the allegation is "unequivocally false".

ASSOC. PROFESSOR JOSEPH MASSAD: So basically they threw a morsel for the right-wing forces that were besieging the university. And instead of standing by academic freedom, instead of saying this is clearly a sham of an allegation, that there was not an element of truth to it.... This was not actually a legal procedure. This was a committee, in my opinion a harassment committee, an inquisition committee, that did not give, sort of, the right of habeas corpus or due process to the accused.

Friday, December 01, 2006

Public intellectual's public intellectuals

Steven Johnson posted the syllabus for his course on public intellectuals. Here's the distilled reading list:
  • Said, Representations Of The Intellectual
  • Chomsky, 9-11
  • Frank, What’s The Matter With Kansas
  • Buckley, God And Man At Yale
  • Johnson, Everything Bad Is Good For You
  • Orwell, “Inside The Whale.”
  • Hitchens, from Why Orwell Matters
  • Hertzberg columns from The New Yorker
  • Bloom, from Closing Of The American Mind
  • E. O. Wilson, from Sociobiology
  • Gould, from The Mismeasure Of Man
  • Gleick, from Chaos
  • Jacobs, from Death And Life Of Great American Cities
  • Putnam, “The Strange Disappearance of Civic America.”
  • Postman, from Amusing Ourselves To Death.
  • Sontag, “Notes On Camp.”
  • Gladwell, “The Tipping Point”, “The Naked Face”
  • Wright, “Two Years Later, A Thousand Years Ago,” “The Big Idea.”
  • Excerpts from;
What would you add?