Proposed Model for Evaluating False Positives in Screening Mammography

First, a definition* –

False positives happen in screening mammography when the images suggest the presence of a malignancy in a woman who doesn’t have cancer in her breast.

Here’s my proposed model –

Categories of False Positives in Screening Mammograms

False positives can arise during any of three conceptual segments of the testing process:

1. False positives occur during the test itself.

This happens when a radiologist inspects a film or digital image and labels the result as abnormal, but no cancer is present. This sort of problem is interpretive.

A common scenario goes like this – a spot in a mammography image suggests the presence of a possible tumor and the radiologist correctly notes that abnormality; later on, a doctor determines by sonogram, biopsy or another method that there is no malignancy in the breast.

(Other, uncommon problems in this category include faulty equipment that reduces image quality, and mislabeling or accidental switching of films; in principle, these kinds of errors should be non-events.)

2. False positives stem from miscommunication or misunderstanding of test results.

If a clerk accidentally phones the wrong patient and tells her she needs another procedure because her mammogram results are abnormal, that call can set a false positive in motion. If the error is corrected early, so that the affected woman worries only for a period of hours and has difficulty sleeping for one night, she might experience some psychological and/or small financial cost from the matter. But if the mistake isn’t caught until after she’s had a sonogram or MRI, and consulted with a surgeon or another physician, the costs grow.

False positives also arise if a patient misunderstands a test result. An essentially normal mammography report, for example, might mention the appearance of benign calcifications. Upon reading that result, a woman or her husband might become upset, somehow thinking that “benign” means “malignant.” This type of false positive error, based in poor communication and lack of knowledge, can indeed generate extra doctors’ visits, additional imaging tests and, rarely, biopsies to relieve misguided fears.

3. False positives derive from errors or misinterpretation of results upon follow-up testing.

This category of false positives in screening mammography is by far the biggest, hardest to define and most difficult to assess. It includes a range of errors and confusion that can arise after breast sonograms, MRIs and breast biopsies.

3a. false positives in subsequent breast imaging studies such as sonograms and MRIs:

Many women in their forties and early fifties are premenopausal; their estrogen-stimulated breasts tend to be denser than those of older women, and reading their mammograms tends to be less accurate than reading those of postmenopausal women. For this reason, a doctor may recommend a sonogram or MRI to further evaluate or supplement the mammography images.

These two radiology procedures – sonograms and MRIs – differ and, for the most part, are beyond the scope of this discussion except that they, too, can generate false positive results. A sonogram, for instance, may reveal a worrisome lump that warrants biopsy. MRIs are more expensive and sensitive; these tend to pick up subtle breast irregularities including a relatively high proportion of benign breast lesions.

3b. false positives in breast biopsy:

A breast biopsy is an invasive procedure by which a piece of the gland is removed for examination under the microscope. Sometimes pathologists use newer instruments to evaluate the genetic, protein and other molecular features of cells in the biopsy specimen. Years ago, surgeons did the majority of breast biopsies. Now, skilled radiologists routinely do a smaller procedure, a core needle biopsy, using a local anesthetic and a small albeit sharp instrument that’s inserted through the skin into the breast. Some doctors do a simpler procedure, fine-needle aspiration, by which they remove cells or fluid from the breast using a small needle attached to a syringe.

In principle, a false positive biopsy result would occur only when a pathologist, a physician trained to examine tumors at the cellular and molecular levels, misreads a case, meaning that he or she reports that the cells appear cancerous when they’re not. Fortunately, this is not a frequent issue in breast cancer diagnosis and management.

The real issue about false positives – and what may be the heart of the issue in mammography screening – has to do with how pathologists describe and define some premalignant conditions and low-grade breast tumors. This concern extends well beyond the scope of this tentative outline, but a few key terms should facilitate future discussion:

Lobular Carcinoma in Situ (LCIS) is not considered a malignancy by most oncologists, but rather an abnormality of breast glands that can develop into breast cancer.

Ductal Carcinoma in Situ (DCIS) is a Stage 0 breast tumor – a tiny cancer of breast cells that have not penetrated through the cells lining the ducts of the breast gland.

Indolent or “slow” tumors – The idea is that some breast cancers grow so slowly there’s no need to find or treat these.**


*This definition warrants some discussion, to follow in a separate post.

**As a physician and trained oncologist, I am uncomfortable with the published notion of some breast tumors being “so slow” that they needn’t be found or evaluated. I include these tumors only for the sake of completeness regarding theoretical types of false positive results upon screening mammography, as there’s been considerable discussion of these indolent tumors in recent news.

Slow-growing breast tumors are quite rare in young women. In my view, their consideration has no bearing on the screening controversy as it pertains to women in their forties and fifties.


As outlined above, the first two categories of false positives seem relatively minor, in that they should be amenable to improvements in the quality of mammography facilities and technology; the third category is huge, and it’s where the money lies, so to speak.

Clearly there’s more work ahead –


A Bit More on False Positives, Dec 2009, Part 1

The question of false positives in breast cancer screening – why and how these happen, how often these occur, and how much these cost, in physical, psychological and financial terms – remains a puzzle.

A few weeks ago the New York Times Magazine featured a piece on “Mammogram Math” under the heading “The Way We Live Now.” The author, a mathematics professor, argues that the risks and costs of mammography, though incalculable, outweigh the benefits. The paper printed the article along with a subtitle, “Why evidence-based medicine is actually right and scary” and later published three letters including one truncated response by me.

After a hiatus, I’ve rescanned the literature – just to be sure the question hasn’t been resolved in the past few weeks by a much-needed interdisciplinary team of health care policy experts, economists, statisticians, surgeons, radiologists, oncologists, nurses and for good measure, perhaps a few breast cancer patients and survivors.

There’s little published progress to report, aside from more hype and theoretical numbers such as I offered in a November essay. So I’ve decided to take the analysis a step further by outlining a tentative framework for thinking about false positives in breast cancer screening.

In a separate post, I will propose a framework for categorizing false positives as they relate to mammography. Why bother, you might ask – wouldn’t it be easier to drop the subject?

“Make it go away,” sang Sheryl Crow about her radiation sessions.

Instead, I’ll answer as might a physician and board-certified oncologist who happens to be a BC survivor in her 40s:

To determine the damage done to women by screening mammography (as some claim and cite as evidence), we need to establish how often false positives lead, in current practice, to additional procedures: sonograms (fairly often, but the costs are relatively small), MRIs (less standard and more expensive), breast biopsies (scarier, slightly risky and more valuable – how else can a pathologist determine whether a woman with a breast lesion has cancer and, in the future, what type of therapy is best?) or frankly inappropriate treatments such as chemotherapy for a non-cancerous condition (very damaging, and the most costly of all putative false positive outcomes).

These numbers matter. They’re essential to the claim that the risks of breast cancer screening outweigh the benefits.


On Juno and Screening Test Stats

“Well, well” says the convenience store clerk. “Back for another test?”

“I think the first one was defective. The plus sign looks more like a division symbol, so I remain unconvinced,” states Juno the pregnant teenager.

“Third test today, mama-bear,” notes the clerk.

Juno secludes herself and uses a do-it-yourself pregnancy test in the restroom, on film.

“What’s the prognosis … minus or plus?” asks the clerk.

…”There it is. The little pink plus sign is so unholy,” Juno responds.

She’s pregnant, clearly, and she knows she is.

(from Juno the movie*)

Juno\’s pregnancy test
Think of how a statistician might consider Juno’s predicament – when a testing device is useful but sometimes gives an unclear or wrong signal.

Scientists use two terms – sensitivity and specificity – among others, to assess the accuracy of diagnostic tests. In general, these terms work best for tests that provide binary sorts of outcomes – “yes” or “no” type situations. Sensitivity refers to how well a screening tool detects a condition that’s really present (pregnancy, in the teenager’s case). Specificity, by contrast, measures how well a test reports results that are truly negative.**

Juno’s readout is relatively straightforward – a pink plus sign, or not; the possibilities regarding her true condition are few.

Still, even the simplest of diagnostic tests can go wrong. Errors can arise from mistakes in the procedure (a cluttered, dirty store is hardly an ideal lab environment), from flawed reagents (the package might be old, with paper that doesn’t turn vividly pink in case of pregnancy) or from misreading results (perhaps Juno needs glasses).

Why does this matter, now?

The medical and political news are dense with statistics on mammograms; getting a handle on the costs of cancer screening requires more information than most of us have at our disposal.

Of course, breast cancer is not like pregnancy. Among other distinguishing features, it’s not a binary condition; you can’t be a little bit pregnant, but cancer varies in kind and degree. (Both are complicated, I know.)

To get to the bottom of the screening issue, we’ll have to delve deeper, still.

*Thanks Juno, Dwight and everyone else involved in the 2007 film; details listed on IMDb.


**I was surprised to find few accessible on-line resources on stats. For those who’d like to understand more on the matter of sensitivity and specificity, I recommend starting with a 2003 article by Tze-Wey Loong in the British Medical Journal. This journal, with a stated mission to “help doctors to make better decisions,” provides open, free access to anyone who registers on-line.

I’ll offer an example here, too:

To measure the accuracy of Juno’s kit, a statistician might visit a community of 100 possibly pregnant women who used the same type of device. If 20 of the women are indeed pregnant (as confirmed by another test, like a sonogram), but only 16 of those see the pink plus sign, the sensitivity of the test would be 16/20, or 80 percent. And if, among the 80 women who aren’t due, 76 get negative results, the specificity would be 76/80, or 95 percent.

False negatives: among the 20 pregnant women 4 find negative results; the false negative rate (FN) is 4/20, or 20 percent.

False positives: among the 80 women who aren’t pregnant 4 see misleading traces of pink; the false positive (FP) rate is 4/80, or 5 percent.


Stats in the News!

False positives have hit the headlines.

Check the New York Times, Wall Street Journal, CNN – they’re everywhere. Even the Ladies’ Home Journal skirts the subject.

The discussion on mammography runs something like this: studies show that cancer screening saves few lives. Among women younger than 50 years, there’s a high rate of false positive results. Those misleading tests lead to more imaging procedures such as sonograms and MRIs, additional biopsies and, necessarily, higher screening costs.

Women are ignoring the numbers, choosing reassurance over hard facts. Some say members of the pro-mammogram camp are irrational, even addicted.

The best response is to look carefully at the research findings.

Two recent publications sparked the current controversy: one, a single paper in the Journal of the American Medical Association and the other, a cluster of articles in the most recent Annals of Internal Medicine. Using a variety of research tools, the authors in both journals examine the effectiveness of cancer screening. Here, the investigators consider the risks and benefits of mammography from a medical perspective; they don’t focus on monetary aspects of the issue.

The problem of false positives in mammography is most fully addressed in the Annals article Screening for Breast Cancer: An Update for the U.S. Preventive Services Task Force. The authors assess, among other newsworthy subjects (such as the value of breast self-examination), the potential risks and benefits of mammography. In the Results section, they delineate five sorts of mammography-associated harms (see “key question 2a”):

1. Radiation exposure – not a big deal, the exposure level’s low;

2. Pain during procedures – women don’t mind this, at least not too much;

3. Anxiety, distress and other psychological responses – the patronizing terms tell all;

4. False-positive and false-negative mammography results, additional imaging, and biopsies – the subject of this and tomorrow’s posts;

5. Over-diagnosis – this interesting and, in my view, exaggerated issue warrants further discussion.

For now, let’s approach the problem of false positives in mammography (as in #4, above).

What is a false positive?

False positives happen in mammography when the images suggest the presence of a malignancy in a woman who doesn’t have cancer in her breast.

How often do these occur?

To their credit, the Annals authors state clearly: “published data on false-positive and false-negative mammography results, additional imaging, and biopsies that reflect current practices in the United States are limited…”

Before we can establish or even estimate the costs of false positives in screening mammography, medical or economic, we need to better define them and then establish the frequency with which they occur.

Turns out, the calculation’s not as simple as you might think.


A Note on False Positives

A colleague sent me an email about my math. You’re more or less right, he said, but you need to account for the false positives.

I agree with him. (It’s true.)

The problem is, among others, how to present those numbers in the press.

Statistics don’t sell; still, an explanation is due.

In my next full post I’ll consider the meaning of false positives – their significance and costs – in the cancer screening debate.

