What's wrong with technology testing statistics

Medical testing serves as a warning about technology benefit claims.

I remember seeing an ad for some energy-saving measure that claimed it saved a “small group” of users $500 a year, without specifying how many people were in the small group or what the distribution of savings within it was. Obviously, there are a lot of claims made about how various new technologies “fix” problems, and I’m skeptical of most such claims. Technologists could learn from how medical testing and treatment statistics can mislead, and from asking whether the healthcare industry takes advantage of the fact that most people don’t understand the statistics it publishes. Are expensive tests and treatments justified?

First let’s talk about testing. Suppose a medical test has a 90% probability of detecting a condition in somebody who has it. (That sounds pretty good.) Also suppose it has a 10% probability of “detecting” the condition in somebody who doesn’t have it, a false positive. (That doesn’t sound too bad either.) Now suppose the condition occurs in only 10 of every 100 people you test. (OK, that’s serious. We need to do something.) Among the 10 people who actually have the condition, 9 test positive and 1 tests negative. Among the 90 people who do not have the condition, 81 test negative and 9 test positive. That makes 18 positive results, but only 9 of them are real: just half the people who test positive actually have the condition. (Hmm, that’s not as good as we’d like, so let’s not publish that fact.)
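To make the arithmetic easy to check, here’s a quick sketch in Python (my own illustration, using only the made-up numbers above) that tallies the four possible test outcomes:

```python
# Tally the four outcomes for the hypothetical scenario above:
# 100 people, 10% prevalence, 90% sensitivity, 10% false positive rate.
population = 100
prevalence = 0.10
sensitivity = 0.90          # P(positive | has condition)
false_positive_rate = 0.10  # P(positive | no condition)

sick = population * prevalence                 # 10 people
well = population - sick                       # 90 people

true_positives = sick * sensitivity            # 9 correctly flagged
false_negatives = sick - true_positives        # 1 missed
false_positives = well * false_positive_rate   # 9 falsely flagged
true_negatives = well - false_positives        # 81 correctly cleared

positives = true_positives + false_positives   # 18 positive tests
ppv = true_positives / positives               # 0.5

print(f"{positives:.0f} positives, {true_positives:.0f} actually sick: "
      f"a positive result is right only {ppv:.0%} of the time")
```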

Conclusion: a 90% accurate test for a condition that occurs in 10% of the population is wrong 50% of the time it gives a positive result. So if you get a positive test result, it’s really just a coin toss whether you are actually sick in this scenario. (Note: in real life, these numbers would be different, possibly very different. But if you don’t know how to do the math, what should you do when you get a positive test result?)
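How much of a coin toss it is depends heavily on how rare the condition is. As a rough sketch (the prevalence values below are illustrative, not real data), here’s the same calculation done with Bayes’ rule at a few different prevalences:

```python
def ppv(prevalence, sensitivity=0.90, false_positive_rate=0.10):
    """P(actually sick | positive test), computed via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for p in (0.01, 0.10, 0.50):
    print(f"prevalence {p:4.0%}: a positive result is right {ppv(p):.0%} of the time")
# prevalence   1%: a positive result is right 8% of the time
# prevalence  10%: a positive result is right 50% of the time
# prevalence  50%: a positive result is right 90% of the time
```

The rarer the condition, the worse a positive result gets, which is why screening everyone for a rare condition produces mostly false alarms.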

Do you see my point? The test’s accuracy numbers looked really good at first, but once we work through the arithmetic, the picture changes completely. The headline accuracy statistics were deceptive.

Now consider treatment. Suppose we treat those 18 positive people with a procedure that cures 2 out of 3 people who have the condition but sickens 2 in 9 who don’t. (A cure rate three times the harm rate doesn’t sound bad.) Of the 9 who actually have the condition, 6 are cured and 3 remain sick; of the 9 who don’t, 2 are made sick and 7 are unaffected. So after treating everyone who tested positive, we have 5 sick people and 13 well people. I’m sure the pharmas would claim the treatment is 72% effective (which is more than ⅔, so that’s great news). But notice that 40% of the people who are sick after treatment were made sick by the treatment, because they didn’t have the condition in the first place. Hopefully the 2 people the treatment sickened are not as sick as the 6 it cured would have been, but that’s hardly a comfort to those who weren’t sick to begin with.
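Continuing the made-up scenario in Python (again, these are illustrative numbers, not data about any real treatment):

```python
# 18 people tested positive: 9 with the condition, 9 without.
sick_positives, well_positives = 9, 9

cured = sick_positives * (2 / 3)      # 6 cured
still_sick = sick_positives - cured   # 3 still sick
harmed = well_positives * (2 / 9)     # 2 made sick by the treatment
unharmed = well_positives - harmed    # 7 unaffected

sick_after = still_sick + harmed      # 5 sick after treatment
well_after = cured + unharmed         # 13 well after treatment

print(f"\"Effectiveness\": {well_after / 18:.1%} of those treated end up well")
print(f"...but {harmed / sick_after:.0%} of those left sick were "
      f"harmed by the treatment itself")
```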

Here again, the treatment performance numbers looked pretty good at first. You tell me I have a 72.2% chance of being cured (higher precision sounds more convincing) and of course I’ll undergo treatment. But if you told me that for every 3 people cured, 1 person who wasn’t sick is made sick, that doesn’t sound so good. Here again, see how easily the numbers can be made to lie.

I’m not going to say all modern medicine is a scam. That would be totally irresponsible. But I am going to say that a lot of people are ill-equipped to understand the information they’re given by doctors, pharmaceutical companies, and governments. (This is especially true when we are emotional and upset by the news.) It’s not easy to tell the difference between government-approved treatments that the math doesn’t justify and genuinely valid treatments that truly are safe and effective. It’s especially hard when the people who do know the real numbers are trying to hide them from us.

The take-away lessons are (1) it’s going to be easy for a technology creator to convince people that a solution is worth buying, and (2) it’s going to be hard for a consumer to evaluate whether the technology will actually provide any benefit.

Written on October 21, 2017