
Tuesday, November 14, 2017

Silicon Valley doesn't understand what's actually hindering medicine

Pam Belluck has a piece in the NYTimes, "First Digital Pill Approved to Worries About Biomedical 'Big Brother'."

I was just thinking today about how much I don't buy this whole "non-compliance costs so much" analysis of medicine. 

I've seen a ton of doctors in the last 10 years, and I only wish medicine were at the point where it was being dispensed sagely and the main problem was patients not following recommendations. 

The things that actually get in the way are much lower-hanging fruit: doctors not paying attention; doctors who do the same exact thing no matter who walks in the door or what they say; doctors who see a patient for less than 2 minutes; doctors not actually examining patients who report complex and specific symptoms; doctors failing to record any notes; doctors prescribing X-rays and more expensive scans while being vague or scribbly about which body part is supposed to be scanned, and technicians getting it wrong; technicians casually asking patients last-minute questions that determine thousands of dollars of scans or other care, questions the patients aren't equipped to answer; doctors unknowingly prescribing medicines the patient has already tried because the doctor never bothered to ask; doctors writing impossible-to-read instructions that make their staff shrug and guess; and so on. Oh, and doctors lying about their own analysis, so casually that I don't think they realize they're lying.

All of these have specifically happened to me. And I'm not seeing an especially bad assortment of doctors; that's just the state of our race-to-the-bottom medicine. I suspect this is how many or most doctors' visits go in the real world, at least in NYC/New England.

I've been prescribed tons of different things over the years, none of which, to my knowledge, helped me (ok, I do appreciate the pain relievers I had after surgery). Monitoring my compliance with these drugs would not have added any value. On the other hand, challenging doctors to have coherent reasons for their medical decisions, and challenging them to be up on medical literature and which medications are helping people with which symptoms, would have.

I'm not saying I'm a typical patient, and I know there are plenty of dotty old folks who must drive doctors crazy with their refusal to reliably take life-saving drugs. But when I read this sort of technocratic, Silicon Valley take on medicine, I feel like it bears no resemblance to the actual medical practice I see.

A friend responds:
I agree that medicine as it stands is still way more imprecise than we tend to believe.

It's definitely in line to be destroyed by big data/machine learning soon; it will happen in no time in fields like radiology.

Sorry, but that's exactly the attitude I think is wrong!

The idea that machine learning can revolutionize medicine assumes that it's a problem like voice recognition or driving -- a matter of taking a narrow task and incrementally improving until you surpass human performance. There are tricky questions about whether AI in those domains can get from 95% of the quality of a sophisticated human performer to 100%+, but presumably it will get there fairly soon, because 99.9% of the information it needs to make the correct decisions is available within discrete parameters and can be fully backtested.

Instead, I think medicine is more like op-ed writing: you can be extremely knowledgeable and still produce worthless work by failing to understand the nature of the problem. What's the set of information you'd need to provide to a piece of software for it to write an intellectually curious opinion piece on a particular topic? I really don't even begin to know.

If you take a fairly narrow domain like discerning the potential of a tumor to be malignant, I grant that machine learning *assistance* is very likely to be capable of helping inform better human judgment. But even there, the constraints of the technology may occlude as much as they illuminate. To even choose to use a given piece of software is to accept, however provisionally, that its preexisting discrete domain is relevant. I think that is very likely to guide doctors away from correct analysis in many cases.
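
To make the "narrow domain" point concrete, here's a rough sketch -- purely my own illustration, not anything from the piece -- using scikit-learn's bundled breast-cancer dataset to show what such a classifier looks like:

    # Illustrative only: a tiny "narrow domain" classifier on scikit-learn's
    # bundled breast-cancer dataset. The model can only ever reason over the
    # 30 features someone decided to measure up front -- its discrete domain
    # is fixed before any particular patient is considered.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print(accuracy_score(y_test, model.predict(X_test)))
    # Scores well *within its domain* -- which says nothing about whether
    # that domain was the right frame for this patient in the first place.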

Just look at all of the studies that find bias in doctors' diagnoses: if having competing doctors in your specialty nearby can make you prescribe more expensive procedures, or if being taken on a junket can make you prescribe the sponsor's medication more often, or if, as a researcher, your lab can transcend a double-blind to produce results that make a splash but can't be replicated, just think what having a set of domain-specific diagnosis bots at your disposal will do to the quality of care. You may even be ordered by insurance companies or your hospital to use them and to abide by their judgment.

The x-ray technicians who began to take an x-ray of my ribs were operating perfectly correctly given their input: the front desk told them my doctor had ordered an x-ray of my ribs, when he had actually ordered an x-ray of my *wrist*. Put aside the obvious fact that this particular mistake isn't one a computer is likely to make; my point is that the technicians were wrong to trust their input, and should have been following quality procedures that question the input itself. That's completely routine for a well-functioning, intelligent human. But it's a total mess for fragile performers like doctors and AI.
