In an optimistic op-ed piece in the May 1 Wall Street Journal, “Will Medicine Ever Make Up Its Mind?” Thomas Goetz, an editor at Wired and author of The Decision Tree, considers the evolution of medical knowledge that might or should inform health care decisions.
It seems like there’s an endless series of contradictory health findings, he writes:
“But here’s the thing: As frustrating as these shifts can be in isolation, taken together they reflect an effective system. Every revision and new recommendation is an attempt to put forward the best available information.
Medical science will always be a moving target, and it will always be an unfinished process…We look to science to get as close to that truth as possible. This is why medicine will always be rooted in risks and probabilities…”
Not to worry, he suggests. While statistics can make us uncomfortable, some researchers at Dartmouth have demonstrated that we can, in fact, handle numbers. Ultimately, a data-driven, evidence-based system will bring us closer to the truth and deliver better care.
OK, so let’s say I agree with this utopian vision of medical informatics. (I do, at least in principle.)
What concerns me is the potential power of data, which can be coercive, misleading and even dangerous if its limits are not fully appreciated or understood by those who use it.
For example, if “studies show” that drug X is better than drug Y for condition Z, then we should choose drug X for condition Z, right? Maybe, or maybe not.
I don’t intend to be simplistic here. Goetz’s very point is that new findings can be wrong, and often are. Knowledge evolves through trial and error. It’s the conglomeration of findings, more persuasive and powerful in its aggregate nature, that can be scary-wrong for a decade or two and thus do harm.
So what if it turns out that each of the small studies on which a consensus rests is flawed, and the direction of science heads backward? In some lines of investigation, where the search for truth meanders and then moves ahead, I wouldn’t worry much. Say there’s some wavering on the validity of string theory, whether it holds or not. As much as I’d like to have a better grasp of that topic, I think it unlikely that the back-and-forth in that sphere will affect the health of many people.
But in the case of medicine, the consequences of decisions include life and death and everything in between: pain, living the rest of one’s life on a ventilator, paralysis, losing one’s virility, deafness, blindness, and good effects, too, like pain relief, survival, and being able to walk again.
My 20-year stint in academic medicine gives me a certain perspective on this issue: sometimes scientists and researchers in medicine don’t provide an objective or unbiased representation of their findings. And, of course, anyone who reads the newspaper now knows there are many undisclosed ties between the pharmaceutical companies, the institutions and individuals that analyze their products, and the doctors who might prescribe those products.
It was just two weeks ago that the Council of Medical Specialty Societies (CMSS) released a voluntary code of ethics regarding those groups’ interactions with commercial enterprises. The council, according to its website, encompasses more than physician specialty groups. Based on the list as it appears online today (May 3, 2010), it looks like 14 groups, including the American Society of Clinical Oncology (ASCO) and the American College of Cardiology (ACC), have signed on, but the majority haven’t yet agreed or, perhaps for administrative reasons, haven’t yet registered their agreement to the code. (I recommend to my readers the Carlat Psychiatry Blog on this announcement.)
The societies are influential because they publish most major medical journals. Those journals are still loaded with ads from pharmaceutical companies, and their editors, human like the rest of us, may not have sufficient self-awareness, conscientiousness, or plain insight to distinguish sound information from bad. Academic leaders serve on government panels that help establish what counts as evidence in medicine, and they usually do all of this with good intentions. Still, they may not be right.
The problem, as I see it, is that evidence is only as good as the quality of the research that underlies it. And unless each editor, professor, or doctor sitting at a computer in a small rectangular room with a patient really gets the details of the studies on which he or she relies to decide between treatments X and Y, and understands the statistics and the limits regarding those drugs’ use in condition Z as reported in a journal N years ago, they’ll be treating the information as dogma: something accepted but not really understood.