By Elaine Schattner, MD|September 7th, 2014
Please follow my new posts at Forbes!
Thank you for your readership, comments and support,
(what follows here at ML will be old posts, rotated occasionally):
By Elaine Schattner, MD|March 15th, 2012
Last week the Annals of Internal Medicine published a new report on how doctors (don’t) understand cancer screening stats. This unusual paper reveals that some primary care physicians – a majority of those who completed a survey – don’t really get the numbers on cancer incidence, 5-year survival and mortality.
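One concept behind the confusion the survey found is lead-time bias: earlier diagnosis can inflate 5-year survival statistics even when no one lives a day longer. Here’s a minimal sketch with hypothetical patients and ages (none of these numbers come from the Annals paper):

```python
# Lead-time bias illustrated with made-up numbers: the same patient,
# diagnosed earlier by screening, "survives 5 years" after diagnosis
# even though death occurs at exactly the same age either way.

death_age = 70  # hypothetical: patient dies of the cancer at 70 regardless

# Without screening, symptoms prompt diagnosis at age 67:
# 3 years from diagnosis to death -> not alive at 5 years post-diagnosis.
dx_age_clinical = 67
five_yr_survival_clinical = 1.0 if death_age - dx_age_clinical >= 5 else 0.0

# With screening, the same tumor is found at age 63:
# 7 years from diagnosis to death -> "alive at 5 years" post-diagnosis.
dx_age_screened = 63
five_yr_survival_screened = 1.0 if death_age - dx_age_screened >= 5 else 0.0

print(five_yr_survival_clinical)  # 0.0
print(five_yr_survival_screened)  # 1.0
# The 5-year survival figure improved; mortality did not change at all.
```

This is why 5-year survival and mortality can move independently, and why conflating them is exactly the kind of error the survey probed.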
An accompanying editorial by Dr. Virginia Moyer, a Professor of Pediatrics and current Chair of the USPSTF, drives two messages in her title, What We Don’t Know Can Hurt Our Patients: Physician Innumeracy and Overuse of Screening Tests. Dr. Moyer is right, to a point: if doctors who counsel patients on screening don’t know what they’re talking about, they may provide misinformation and cause harm. But she overstates the study’s implications by emphasizing the “overuse of screening tests.”
The report shows, plainly and painfully, that too many doctors are confused and even ignorant of some statistical concepts. Nothing more, nothing less. The new findings have no bearing on whether or not cancer screening is cost-effective or life-saving.
What the study does suggest is that med school math requirements should be raised and made more rigorous, counter to the current trend. And that we should do a better job educating students and reminding doctors about relevant concepts including lead-time bias, overdiagnosis and – as highlighted in two valuable blog posts just yesterday, NPR Shots and Reporting on Health Antidote – the Number Needed to Treat, or NNT.
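The NNT itself is simple arithmetic: the reciprocal of the absolute risk reduction. A quick sketch, using invented round numbers purely for illustration (not figures from any of the studies discussed here):

```python
# NNT = 1 / ARR, where ARR is the absolute risk reduction.
# All counts below are hypothetical, chosen only to make the math clean.

n = 1000                 # people per arm
deaths_unscreened = 5    # assumed cancer deaths per 1000 without screening
deaths_screened = 4      # assumed cancer deaths per 1000 with screening

arr = (deaths_unscreened - deaths_screened) / n  # absolute risk reduction
nnt = 1 / arr                                    # number needed to screen

print(f"ARR = {arr:.3f}")   # 0.001, i.e. one death averted per 1000
print(f"NNT = {nnt:.0f}")   # 1000 people screened to avert one death
```

The point of the statistic is that a relative-risk headline ("20% fewer deaths") can correspond to a very large NNT when the baseline risk is small.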
The Annals paper has yielded at least two unfortunate outcomes. One, which there’s no way to get around, is the clear admission of doctors’ confusion. In the long term, this may be a good thing, like admitting a medical error and then having QA improve as a consequence. But meanwhile some doctors at their office desks and lecterns don’t realize what they don’t know, and there’s no clear remedy in sight.
Dr. Moyer, in her editorial, writes that medical journal editors should carefully monitor reports to ensure that results aren’t likely to be misinterpreted. She says, in just one half-sentence, that medical educators should improve teaching on this topic. And then she directs the task of stats-ed to media and journalists, who, she advises, might follow the lead of the “watchdog” HealthNewsReview. I don’t see that as a solution, although I agree that journalists should know as much as possible about statistics and the limits of the data they report on.
The main problem elucidated in this article is a failure in medical education. The cat’s out of the bag now. The WSJ Health Blog covered the story. Most doctors are baffled, says Fox News. On its home page, the Dartmouth Institute for Health Policy & Clinical Practice links to a Reuters article that’s landed on the NIH/NLM-sponsored MedlinePlus (accessed 3/15/12). This embarrassment further compromises individuals’ confidence in the doctors they would, and sometimes must, rely on.
The second, and I think unnecessary, problematic outcome of this report is that it’s been used to argue against cancer screening. In the editorial Dr. Moyer indulges in an ill-supported statement:
…several analyses have demonstrated that the vast majority of women with screen-detected breast cancer have not had their lives saved by screening, but rather have been diagnosed early with no change in outcome or have been overdiagnosed.
The problem of overdiagnosis, which comes up a lot in the paper, is over-emphasized, at least as it relates to breast cancer, colon cancer and some other tumors. I have never seen a case of vanishing invasive breast cancer. In younger women, low-grade invasive tumors are relatively rare. So overdiagnosis isn’t applicable in BC, at least for women who are not elderly.
In the second paragraph Dr. Moyer outlines, in an unusual mode for the Annals, a cabal-like screening lobby:
…powerful nonmedical forces may also lead to enthusiasm for screening, including financial interests from companies that make tests or testing equipment or sell products to treat the conditions diagnosed and more subtle financial pressures from the clinicians whose daily work is to diagnose or treat a condition. If fewer people are diagnosed with a disease, advocacy groups stand to lose contributions and academics who study the disease may lose funding. Politicians may wish to appear responsive to powerful special interests…
She may be right that some influential, self-serving interests and corporations push aggressively, and maybe too aggressively, for cancer screening. But it may also be that some forms of cancer screening are indeed life-saving tools that our society should value. I think, also, that she goes too far in insinuating that major advocacy groups push for screening because they stand to lose funding.
I’ve met many cancer agency workers, some founders, some full-time, paid and volunteer helpers – with varied priorities and goals – and I honestly believe that each and every one of those individuals hopes that the problem of cancer killing so many non-elderly individuals in our society will go away. It’s beyond reason to suggest there’s a hidden agenda at any of the major cancer agencies to “keep cancer going.” There are plenty of other worthy causes to which they might give their time and other resources, like education, to name one.
Which leads me back to the original paper, on doctors’ limited knowledge –
As I read the original paper the first time, I considered what would happen if you tested 412 practicing primary care physicians on other topics: hepatitis C screening, its strains, and whether there’s a benefit to early detection and treatment of that common and sometimes pathogenic virus; the use of aspirin in adults with high blood pressure and other risk factors for heart disease; or the risks and benefits of cholesterol-lowering drugs.
It seems highly unlikely that physicians’ uncertainty is limited to conceptual aspects of cancer screening stats. Knowing that, you’d have to wonder why the authors did this research, and why the editorial pushes the message of over-screening so hard.
By Elaine Schattner, MD|February 24th, 2011
There’s a new study out on mammography with important implications for breast cancer screening. The main result is that when radiologists review more mammograms per year, the rate of false positives declines.
The stated purpose of the research,* published in the journal Radiology, was to see how radiologists’ interpretive volume – essentially the number of mammograms read per year – affects their performance in breast cancer screening. The investigators collected data from six registries participating in the NCI’s Breast Cancer Surveillance Consortium, involving 120 radiologists who interpreted 783,965 screening mammograms from 2002 to 2006. So it was a big study, at least in terms of the number of images and outcomes assessed.
First – and before reaching any conclusions – the variance among seasoned radiologists’ everyday experience reading mammograms is striking. From the paper:
…We studied 120 radiologists with a median age of 54 years (range, 37–74 years); most worked full time (75%), had 20 or more years of experience (53%), and had no fellowship training in breast imaging (92%). Time spent in breast imaging varied, with 26% of radiologists working less than 20% and 33% working 80%–100% of their time in breast imaging. Most (61%) interpreted 1000–2999 mammograms annually, with 9% interpreting 5000 or more mammograms.
So they’re looking at a diverse group of radiologists reading mammograms, as young as 37 and as old as 74, most with no extra training in the subspecialty. The fraction of work effort spent on breast imaging – presumably mammography, sonograms and MRIs – varied widely: a quarter of the group (26%) spent less than a fifth of their time on it, while a third (33%) spent almost all of their time on breast imaging studies.
The investigators summarize their findings in the abstract:
The mean false-positive rate was 9.1% (95% CI: 8.1%, 10.1%), with rates significantly higher for radiologists who had the lowest total (P = .008) and screening (P = .015) volumes. Radiologists with low diagnostic volume (P = .004 and P = .008) and a greater screening focus (P = .003 and P = .002) had significantly lower false-positive and cancer detection rates, respectively. Median invasive tumor size and proportion of cancers detected at early stages did not vary by volume.
This means that radiologists who review more mammograms are better at reading them correctly. The main difference is that they are less likely to call a false positive. Their work is otherwise comparable, mainly in terms of cancers identified.**
This matters because the costs of false positives – emotional (which I have argued shouldn’t matter so much), physical (surgery, complications of surgery, scars) and financial (the costs of biopsies and surgery) – are said to be the main problem with breast cancer screening by mammography. If we can reduce the false-positive rate, BC screening becomes more efficient and safer.
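To put a rough scale on it: the abstract reports a mean false-positive rate of 9.1%, so per 1,000 screening mammograms, on the order of 91 women would be called back unnecessarily. A back-of-envelope sketch, where the lower "high-volume reader" rate is my assumption for illustration, not a figure from the paper:

```python
# Rough arithmetic on false-positive burden per 1,000 screens.
# 9.1% is the mean false-positive rate from the study's abstract;
# the 7.5% figure for high-volume readers is assumed, for illustration.

screened = 1000
fp_rate_mean = 0.091         # mean FP rate reported in the study
fp_rate_high_volume = 0.075  # hypothetical lower rate for busy readers

fp_mean = screened * fp_rate_mean
fp_high_volume = screened * fp_rate_high_volume

print(f"False positives per 1000 screens: {fp_mean:.0f} vs {fp_high_volume:.0f}")
print(f"Unnecessary callbacks avoided: {fp_mean - fp_high_volume:.0f}")
```

Even a modest drop in the false-positive rate, multiplied across millions of screening mammograms a year, would spare a large number of women needless biopsies and worry.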
Time provided the only major press coverage I found of this study, and suggests the findings may be counter-intuitive. I guess the notion is that radiologists might tire of reading so many films, or that a higher volume of work is inherently detrimental.
But I wasn’t at all surprised, nor do I find the results counter-intuitive: the more time a medical specialist spends doing the same sort of work – say examining blood cells under the microscope, as I used to do, routinely – the more likely that doctor will know the difference between a benign variant and a likely sign of malignancy.
Finally, the authors point to the potential problem of inaccessibility of specialized radiologists – an argument against greater requirements, in terms of the number of mammograms a radiologist needs to read per year to be deemed qualified by the FDA and MQSA. The point is that in some rural areas, women wouldn’t have access to mammography if there’s more stringency on radiologists’ volume. But I don’t see this accessibility problem as a valid issue. If the images were all digital, the doctor’s location shouldn’t matter at all.
*The work, put forth by the Group Health Research Institute and involving a broad range of investigators – biostatisticians, public health specialists, and radiologists from institutions across the U.S. – received significant funding from the ACS, the Longaberger Company’s Horizon of Hope Campaign, the Breast Cancer Stamp Fund, the Agency for Healthcare Research and Quality (AHRQ) and the NCI.
**I recommend a read of the full paper and in particular the discussion section, if you can access it through a library or elsewhere. It’s fairly long, and includes some nuanced findings I could not fully cover here.