Deep Learning, Health-care, and the Potential for Fraud

You’ve seen the news about Deep Learning and Health Care, right? About how image classification using these algorithms is turning out to be better than doctors (•) at identifying issues in fields like pathology, ophthalmology, and radiology? It does sound like good news from a diagnosis perspective, with more accurate results leading to earlier intervention, better treatment options, improvement in health care, and so on.
But there are humans involved in this process, humans who are deeply embedded in the health-care industry — the same industry that makes billions and billions of dollars in profits based on a crazy-quilt of complexity when it comes to providing services (in the US at least). And that’s before we even begin to think about fraud and abuse.
Let’s stick with image classification. We know that there is an entire universe of adversarial attacks, attacks that can be used to “fuzz” the original image just a little bit and make the classification algorithm fail, or worse, make it think the image is something else entirely (even when the human eye literally cannot tell the difference). The issue seems to be something buried deep in the very nature of image classification too šŸ˜.
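To make the “fuzz” idea concrete, here is a toy NumPy sketch of the core trick behind attacks like FGSM: nudge every pixel a tiny amount in the direction that most moves the classifier’s score. The “classifier” below is just a random linear model standing in for a trained network (whose gradient would play the role of `w` here); everything about it is a made-up illustration, not a real medical model.

```python
import numpy as np

# Toy stand-in for a trained classifier: a fixed linear score over
# flattened image pixels. For a linear model, the gradient of the
# score with respect to the input is simply the weight vector w.
rng = np.random.default_rng(0)
w = rng.normal(size=784)           # weights for a 28x28 "image"
x = rng.uniform(size=784)          # the original image, pixels in [0, 1]

def score(img):
    return float(w @ img)          # > 0 means "positive" diagnosis

# FGSM-style step: move each pixel by at most eps in the direction
# that pushes the score toward the opposite class. eps this small is
# essentially invisible to the human eye.
eps = 0.01
direction = -np.sign(w) if score(x) > 0 else np.sign(w)
x_adv = np.clip(x + eps * direction, 0.0, 1.0)

# Every pixel changed by at most eps, yet the score moves toward
# the other class — the essence of an adversarial perturbation.
```

On a real deep network the same step uses the backpropagated gradient instead of `w`, which is why these attacks are so cheap to mount once you have access to the model.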
A recent paper by Finlayson, Kohane and Beam, “Adversarial Attacks Against Medical Deep Learning Systems”, goes into some detail on this. They describe some of the incentives in the health-care industry when it comes to manipulating the data to mislead diagnosis (to increase profit). Mind you, this doesn’t necessarily have to be harmful; it could be edge-case stuff, where you flip the diagnosis from one side of the line to the other.
Take the following three examples:
  1. Dermatology: In the US, this is largely fee-for-service. A bad actor could tweak the algorithm just enough, by adding noise to borderline cases, so that the results indicated that the skin condition could be threatening and needed to be excised. Of course, a really bad actor could just do it for any condition and rake in the bucks.
  2. Radiology: The difference between positive and negative results in a clinical trial can mean billions of dollars to Big Pharma. Tweaking radiology images (e.g., for lung-cancer clinical trials) to show positive results would, well, be beneficial from a profit perspective, no? I mean, think of the money!
  3. Ophthalmology: Insurers are vested in reducing the number of surgeries that take place. By adding appropriately adversarial noise to images, they could make mildly positive images (e.g., for diabetic retinopathy) appear to be normal. Not much of a tweak, and causing minimal harm, but significantly reducing their payouts.
These are just the ones that Finlayson et al. describe, there are no doubt many more that you too could come up with. And, as the examples show, each of the players — physicians, pharma, insurers — has a profit-motive associated with tweaking the data to their benefit. (••)
The obvious question is, “How do we go about dealing with this?”
Algorithmic solutions are, well, not very fruitful yet. They are hard to do at scale, and as mentioned earlier, there might even be some basic uncertainty principle at play preventing us from getting to a “perfect solution”. On the other hand, there are some positive results that might show up in domain-specific solutions (i.e., “this works for radiology, but not for dermatology”, and vice versa).
Infrastructure solutions might be the best bet here. These can range from the technical (hash the raw data, and use those hashes to validate data integrity through the processing chain), to the structural (have processing done by a “neutral third party”), to the financial (ramp up the penalties).
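The hashing idea can be sketched in a few lines: record a SHA-256 digest of each raw image at acquisition time, then re-verify before the classifier ever sees the data, so any post-acquisition tweak is detectable. The filenames and byte contents below are placeholders for illustration only.

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest of a raw image's bytes."""
    return hashlib.sha256(data).hexdigest()

# At acquisition: build a manifest of digests for the raw scans.
# (Filenames and contents here are made-up stand-ins for DICOM files.)
raw_scans = {
    "scan_001.dcm": b"...raw scan bytes...",
    "scan_002.dcm": b"...more raw bytes...",
}
manifest = {name: digest(blob) for name, blob in raw_scans.items()}

# Later, just before inference: verify nothing was altered in transit.
def verify(name: str, data: bytes, manifest: dict) -> bool:
    return digest(data) == manifest.get(name)

assert verify("scan_001.dcm", raw_scans["scan_001.dcm"], manifest)

# Even a single changed byte (an adversarial tweak, say) fails the check.
tampered = b"...raw scan bytes..!"
assert not verify("scan_001.dcm", tampered, manifest)
```

This only proves the data wasn’t changed after hashing, of course; it does nothing if the bad actor controls the acquisition step itself, which is where the structural (neutral third party) piece comes in.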
The bottom line, though, is, as the authors put it, that “the massive scale of the healthcare economy brings with it significant opportunity and incentive for fraudulent behavior”, and what we need is “a meaningful discussion into how precisely these algorithms should be incorporated into the clinical ecosystem despite their current vulnerability to such attacks”.
(•) Ok, better than individual doctors, on average. Whatever.
(••) As far as the patient goes, well, they’re the least important person in this puzzle, right? šŸ™
