Notes on the Social History of American Medicine, Self-Reliance and Health Care, Today

Over my vacation I read a bit on the history of health care in the United States. The Social Transformation of American Medicine, by Paul Starr, was first published in 1982. The author, a professor of sociology and public affairs at Princeton, gives a fascinating, relevant account in two chunks. In the first section, he details the rise of professional authority among physicians in the U.S. In the second part, he focuses on the relationship of doctors to corporations and government.

I couldn’t put this book down. Seriously, it’s a page-turner, at least in the first half, for anyone who cares about medical education, doctors’ work, and how people find and receive health care. In an early chapter, on medicine in colonial and early 19th Century America, Starr recounts the proliferation of medical schools and doctors, or so-called doctors, in the years after 1812. One problem of that era, besides a general lack of scientific knowledge about disease, was that it didn’t take much to get a medical degree. State licensing laws didn’t exist for the most part, and where they were put in place, such as in New York City, they were later rescinded. Then as now, many practicing folks didn’t want regulations.

Doctors were scarce and not always trustworthy. People, especially in rural areas, chose or had to be self-reliant. Many referred to lay sources for information. Starr writes of the “domestic” tradition of medical care:

…Women were expected to deal with illness in the home and to keep a stock of remedies on hand; in the fall, they put away medicinal herbs as they stored preserves. Care of the sick was part of the domestic economy for which the wife assumed responsibility. She would call on networks of kin and community for advice and assistance when illness struck…

As he describes it, one book – William Buchan’s Domestic Medicine – was reprinted at least 30 times. It included a section on causes of disease and preventive measures, and a section on symptoms and treatments. By the mid-19th Century a book by John C. Gunn, also called Domestic Medicine, or Poor Man’s Friend…offered health advice in plain language.

Starr considers these and other references in the context of Protestantism, democracy and early American culture:

…while the domestic medical guides were challenging professional authority and asserting that families could care for themselves, they were also helping to lay the cultural foundations of modern medical practice – a predominantly secular view of sickness…the authority of medicine now reached the far larger number who could consult a physician’s book.

Reading this now, I can’t help but think of the Internet and other popular and accessible resources that challenge or compete with doctors’ authority. Other elements of Starr’s history pertain to current debates on medical education, credentialing and distribution of providers.

Just days ago, for example, the New York Times ran an editorial on a trend of getting Health Care Where You Work. The paper reported on Bellin Health, an allegedly non-profit entity, that designs on-site clinics for medium-sized companies. “It has managed to rein in costs while improving the availability and quality of care — in large part by making it easier for patients to see nurses and primary care doctors,” according to the Times opinion. The clinics are “staffed part-time by nurses, nurse practitioners or physician assistants, who handle minor injuries and illnesses, promote healthy living and conduct preventive screenings.”

The editorial touts Dartmouth Atlas data and other high marks for the care Bellin provides at low costs to possibly happy workers and their satisfied employers. Still, it’s not clear to me that an on-site clinic would be a great or even a good place to seek care if you had a subtle blood disorder or something like the newly-reported Heartland virus.

On reading the editorial on delivering health care to the workplace, I was reminded of Starr’s tale of the development of clinics at railroad and mining companies in the first half of the 20th Century. This happened mainly in rural areas where few doctors lived, at industry sites where injuries were frequent. The workers, by Starr’s account, were generally suspicious of the hired physicians and considered them inferior to private doctors whom they might choose if they became ill. They resented paying mandatory fees to support those on-site doctors’ salaries. Doctors’ groups, like the AMA, generally opposed and even ostracized those “company doctors” for selling out, or selling themselves, at a lower price.

The second half of the Social Transformation, on failed attempts at reform before 1982, is somewhat but not entirely outdated in light of Obamacare and the 30 years intervening. But many of the issues, such as consideration of the “market” for doctors and the number of physicians we need, relate to this week’s papers, including an Economix column by another Princeton professor, Uwe Reinhardt, who puts forth a view that, well, I don’t share. As I understand his position, Reinhardt suggests that there may be no real shortage of doctors, because physicians can always scrunch their workloads to fit the time allotted. But that’s a separate matter…

In sum, on the Social Transformation, today: Worthwhile! Curious! Pertinent! Starr’s book is chock full of history “lessons” that might inform medical practice in 2012. And I haven’t even mentioned my favorite segments – on prohibiting doctors’ advertisements (think websites, now), the average workload of physicians before 1900 (think 5 or so patients per day), and the impact of urbanization on medical care and doctors’ lives and specialization.

Lots to think about, and read.

All for now,

ES

 

Related Posts:

Why Hurricanes Remind Me of Patient Care

Last week, Tropical Storm Isaac started tracking toward the Gulf of Mexico. As usual, the prediction models offered varying forecasts. Nonetheless, by this weekend a consensus emerged that the tempestuous weather system would, most likely, affect the City of New Orleans.

National Hurricane Center image

The Mayor, Mitch Landrieu, didn’t panic. I watched him on TV on Sunday evening in an interview with CNN’s Wolf Blitzer and Erin Burnett. Isaac wasn’t a hurricane yet, although a Category I or II storm was predicted by then. He didn’t order an evacuation. Rather, he emphasized the unpredictable nature of storms. There’d be business as usual the next day, on Monday morning August 27. Mind the weather reports, and do what you need to do, he suggested to the citizens. He did mention there’d be buses for people who registered.

“Don’t worry,” was the gist of his message to the citizens of New Orleans. The levees should hold. He exuded confidence. Too much, perhaps.

Some people are drawn to leaders – or doctors – who blow off signs of a serious problem. “It’s nothing,” they might say to a woman who fell after skiing and hit her head, or to a man with a history of lymphoma who develops swollen glands and fever. It’s trendy, now, and sensible, to be cost-conscious in medical care. This is a terrific approach except when it misses a treatable and life-threatening condition or one that’s much less expensive to fix earlier than later.

“Every storm is different,” meteorologist Chad Myers informs us.

Like tumors. Sometimes you see one that should have a favorable course, like a node-negative, estrogen-receptor-positive breast tumor in a 65 year old woman, but it spreads to her bones within a year. Or a lymphoma in a 40 year old man that looks to be aggressive under the light microscope but regresses before the patient has gone for a third opinion. But these are both exceptions. Cancer can be hard to predict; each case is a little different. Still, there are patterns and trends, and insights learned from experience with similar cases and common ways of spreading. Sometimes it’s hard to know when to treat aggressively. Other times, the pathology is clear. Sometimes you’re wrong. Sometimes you’re lucky….

In New Orleans, the Mayor’s inclination was to let nature take its course. He’s confident in the new levees, tested now by Isaac’s slow pace and prolonged rains. I do hope they hold.

Related Posts:

Talking About Physician Burnout, and Changing the System

Dear Readers,
I have a new story at The Atlantic’s health section. It’s on burnout among physicians. The problem is clear: too many have a hard time finding satisfaction in the workplace. Many struggle with work-life balance and symptoms of depression.

With many difficult situations, the first step in solving a problem is acknowledging that it exists. After that, you can understand it and, hopefully, fix it. Our health care system now, as it functions in most academic medical centers and dollar-strapped hospitals, doesn’t give doctors much of a break, or slack, or “joy,” as Dr. Vineet Arora suggested in an interview. You can read about it here. The implications for patients are very real.

Glad to see that research is ongoing about physicians’ stress, fatigue and depression. Thank you to Drs. Tait Shanafelt, Mary Brandt, Vineet Arora and others for addressing these under-studied and under-discussed issues in medicine. Through this kind of work, policy makers and hospital administrators might better know how to keep doctors in the workforce, happy and healthy.

ES

Related Posts:

A Closer Look at the Details on Mammography, in Between the Lines

Recently I wrote a review of Between the Lines, a helpful handbook on bio-medical statistics authored by an acquaintance and colleague, Dr. Marya Zilberberg. In that post, I mentioned my concern about some of the assumptions and statements on mammography. One thing I liked about the book, abstractly, is the author’s efforts to streamline the discussion so that the reader can follow the concepts. But simplification and rounding numbers, “for ease of presentation” (p. 29), can mess up facts significantly, in ways that some primary care doctors and journalists might not appreciate. And so I offer what I hope is a clarification, or at least an extension of my colleague’s work, for purposes of helping women understand the potential benefits and risks of mammography.

In the section on mammography (pp. 28-31), the author rounds down the incidence of breast cancer in women between the ages of 40 and 50 years, from “1 in 70” (1.43%) to “1 in 100” (1%). As any marketing professional might remind us, this small change represents a 30% drop (0.43/1.43) in the rate of breast cancer in women of that age group. This difference – of 30%, or 43%, depending on how you look at it – will factor into any calculation of the false positive (FP) rate and the positive predictive value (PPV) of the test.

For women ages 40-49 (per 10,000 screened):

                                Have breast cancer    Don’t have breast cancer
If estimate 1 in 100 (1.0%)            100                     9,900
If estimate 1 in 70 (1.43%)            143                     9,857

Keep in mind that this same proportional difference would apply to any BC screening considerations – the number of women affected, the potential benefits and costs – for the 22,996,493 women between the ages of 40 and 49 counted in the 2010 U.S. Census.

My colleague estimates, fairly for this younger age group of women (who are relatively disposed to fast-growing tumors), that the screening technology (mammography) only picks up 80% of cases; 20% go undetected. In other words – the test is 80% sensitive; the false negative (FN) rate is 20%. In this same section, she takes the FP rate as 10%. Let’s accept this (unacceptably high) FP rate for now, for the sake of discussion.

As considered in Between the Lines:

If FP rate is 10%, prevalence 1 in 100:

                  Really have BC    Don’t have BC     Total
Mammography +           80                990         1,070
Mammography –           20              8,910         8,930
Total                  100              9,900        10,000

But the above numbers aren’t valid, because the disease affects over 1 in 70 women in this age bracket. Here’s the same table with a prevalence of 1 in 70 women with BC:

If FP rate is 10%, prevalence 1 in 70:

                  Really have BC    Don’t have BC     Total
Mammography +          114                986         1,100
Mammography –           29              8,871         8,900
Total                  143              9,857        10,000

In this closer approximation to reality, the number of true positives is 114, and false positives 986, among 1,100 abnormal screening results. Now, the PPV of an abnormal mammogram is 114/(114+986) = 10.4%. So the main statistical point – apart from the particulars of this discussion – is that a seemingly slight rounding down can have a big impact on a test’s calculated and perceived value. By adjusting the BC rate to its prevalence of approximately 1 in 70 women between 40 and 49 years, we’ve raised the PPV from 7.5% to 10.4%.
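For readers who’d like to check the arithmetic themselves, the two tables boil down to a few lines of Python. This is my own sketch, not from the book; the prevalence, sensitivity and FP-rate figures are the ones discussed above.

```python
def screening_ppv(prevalence, sensitivity, fp_rate, n=10_000):
    """Build the 2x2 screening table for a cohort of n women
    and return the positive predictive value (PPV) of the test."""
    have_bc = round(n * prevalence)          # women with breast cancer
    no_bc = n - have_bc                      # women without breast cancer
    true_pos = round(have_bc * sensitivity)  # cancers the test catches
    false_pos = round(no_bc * fp_rate)       # healthy women flagged anyway
    return true_pos / (true_pos + false_pos)

# Rounded-down prevalence (1 in 100), 80% sensitivity, 10% FP rate:
print(f"{screening_ppv(1/100, 0.80, 0.10):.1%}")  # 7.5%
# Closer-to-reality prevalence (1 in 70):
print(f"{screening_ppv(1/70, 0.80, 0.10):.1%}")   # 10.4%
```

Nothing changes in the test itself between the two calls; only the assumed prevalence moves, yet the PPV shifts from 7.5% to 10.4%.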

Here I must admit that I, too, have rounded, although I did so conservatively and very slightly. I adopted a 1 in 70 approximation (1.43%) instead of 1 in 69 (1.45%), as indicated on the NCI website. If we repeat the table and figures using a 1 in 69 (1.45%) prevalence rate and the same 10% FP rate, the PPV rises a tad, to 10.5%.

Now, we might insert a different perspective: What if the false positive rate were 6%, as has been observed among sub-specialist radiologists who work mainly in breast cancer screening?

If FP rate is 6%, prevalence 1 in 70:

                  Really have BC    Don’t have BC     Total
Mammography +          114                591           705
Mammography –           29              9,266         9,295
Total                  143              9,857        10,000

As you can see, if we use a FP rate of 6% in our calculations, the total number of FPs drops to 591 among 10,000 women screened. In this better-case scenario, the PPV of the test would be 114/(114+591) = 16%. Still, that’s not great – I’d argue that public health officials, insurers and patients should be pushing for FP rates closer to 2 or 3% – but that’s apart from my colleague’s point and her generally instructive work.
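To see how strongly the FP rate drives the test’s yield, here is a small sweep of my own, holding the 1-in-70 prevalence and 80% sensitivity fixed per 10,000 women screened; the 2% and 3% rates are the targets argued for in this post, not observed figures.

```python
n = 10_000
have_bc = round(n / 70)           # 143 women with breast cancer
true_pos = round(have_bc * 0.80)  # 114 cancers caught at 80% sensitivity
no_bc = n - have_bc               # 9,857 women without breast cancer

for fp_rate in (0.10, 0.06, 0.03, 0.02):
    false_pos = round(no_bc * fp_rate)       # healthy women flagged anyway
    ppv = true_pos / (true_pos + false_pos)  # share of positives that are real
    print(f"FP rate {fp_rate:.0%}: {false_pos:4d} false positives, PPV {ppv:.0%}")
```

At 6% the PPV reaches about 16%, matching the table above; at FP rates of 2 to 3%, roughly a third of abnormal mammograms would reflect a true cancer.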

My second concern has to do with language, and making the consequences of false positives seem worse than they really are. On page 29, the author writes: “So, going back to the 10,000 women being screened, of 9,900 who do NOT have cancer… 10%, or 990 individuals will still be diagnosed as having cancer.” The fact is, the overwhelming majority of women with positive mammograms won’t receive a cancer diagnosis. Rather, they’ll be told they have “an abnormal result, or a finding that suggests the possibility of cancer and needs further evaluation,” or something along those lines. It would be unusual in practice to jump from a positive mammogram straight to a breast cancer diagnosis. There are steps between, and every patient and journalist should be aware of those.


Finally, if I were to write what I really think, apart from and beyond Between the Lines – I’d suggest the FP rate should be no higher than 2 or 3% in 2012. This is entirely feasible using extant technology, if we were to change just two aspects of mammography practice in the U.S. First, require that all mammograms be performed by breast radiologists who get extra training and focus in their daily work almost exclusively on breast imaging. Second, make sonograms – which, together with mammograms, enhance the specificity of BC screening in women with dense breasts – universally available to supplement the radiologists’ evaluations of abnormal mammograms and dense breasts in younger women.

By implementing these two changes, essentially supporting the practice of sub-specialists in breast radiology, we could significantly lower the FP rate in breast cancer screening. The “costs” of those remaining FPs could be minimized by judicious use of sonograms, needle biopsies and other measures to reduce unnecessary surgery and over-treatment. Over the long haul, we need to educate doctors not to over-treat early stage disease, but that goes far beyond this post and any one woman’s analysis of mammography’s effectiveness.

All for now,
ES

Related Posts:

Reading Between the Lines, and Learning from an Epidemiologist

Early on in Between the Lines, a breezy new book on medical statistics by Dr. Marya Zilberberg, the author encourages her readers to “write, underline, highlight, dog-ear and leave sticky notes.” I did just that. Well, with one exception; I didn’t use a highlighter. That’s partially due to my fear of chemicals, but mainly because we had none in my home.

I enjoyed reading this book, perhaps more than I’d anticipated. Maybe that’s because I find the subject of analyzing quantitative data, in itself, dull. But this proves an easy read: it’s short and not boring. The author avoids minutiae. Although I’m wary of simplified approaches – because as she points out, the devil is often in the details of any study – this tack serves the reader who might otherwise drop off this topic. Her style is informal. The examples she chooses to illustrate points on medical studies are relevant to what you might find in a current journal or newspaper this morning.

Over the past year or two, I have gotten to know Dr. Zilberberg, just a bit, as a blogging colleague and on-line associate. This book gave me the chance to understand her perspective. Now, I can better “see” where she’s coming from.

There’s a lot anyone with an early high school math background, or a much higher level of education, might take away from this work. For doctors who’ve attended four-year med schools and, of course, know their stats well (I’m joking, TBC*), this book provides an eminently readable review of basic concepts – sensitivity, specificity, types of evidence, types of trials, Type II errors, etc. For those – pharmacy students, journalists and others – looking for an accessible source of information on terms like “accuracy” or HTE (heterogeneous treatment effect), Between the Lines will fill you in.

The work reads as a skinny, statistical guidebook with commentary. It includes a few handy tables – on false positives and false negatives (Chapter 3), types of medical studies (Chapter 14), and relative risk (Chapter 19). There’s considered discussion of bias, sources of bias, hypothesis testing and observational studies. In the third chapter the author uses lung cancer screening scenarios to effectively explain terms like accuracy, sensitivity and specificity in diagnostic testing, and the concept of positive predictive value.

Though short, this is a thoughtful, non-trivial work with insights. In a segment on hierarchies of evidence, for example, the author admits “affection for observational data.” This runs counter to some epidemiologists’ views. But Zilberberg defends it, based on the value of observational data in describing some disease frequencies, exposures, and long-term studies of populations. In the same chapter, she emphasizes knowing – and stating – the limits of knowledge (p. 37): “…I do think we (and the press) do a disservice to patients, and to ourselves, and to the science if we are not upfront about just how uncertain much of what we think we know is…”

Mammography is, not surprisingly, one of the few areas about which I’d take issue with some of the author’s statements. For purposes of this post and mini-review, I’ll leave it at that, because I think this is a helpful book overall and in many particulars.

Dr. Zilberberg cites a range of other sources on statistics, medical studies and epistemology. One of my favorite quotes appears early on, from the author directly. She considers the current, “layered” system of disseminating medical information through translators, who would be mainly physicians, to patients, and journalists, to the public. She writes: “I believe that every educated person must at the very least understand how these interpreters of medical knowledge examine, or should examine, it to arrive at the conclusions.”

This book sets the stage for richer, future discussions of clinical trials, cancer screening, evidence-based medicine, informed consent and more. It’s a contribution that can help move these dialogues forward. I look ahead to a continued, lasting and valuable conversation.

 —

*TBC = to be clear

Related Posts:

FDA Approves Pertuzumab for Advanced, Her2+ Breast Cancer

We’re on a roll for new treatments of the Her2+ form of breast cancer. On Friday the FDA approved Pertuzumab, a monoclonal antibody, for advanced cases. As indicated, the drug would be given along with another monoclonal antibody, trastuzumab (Herceptin), and a chemotherapy, docetaxel (Taxotere), to patients with advanced breast tumors with high levels of Her2.

The new treatment’s brand name is Perjeta. Like Herceptin, this agent works by attaching to the Her2 receptor on a cancer cell’s surface. But it differs by binding a distinct part of the molecule; its mechanism of action is said to complement that of Herceptin. You might recall that HER protein family members are complex signaling molecules that span cell membranes. Her1 is the Epidermal Growth Factor Receptor; it’s turned on when bound by its partner, or molecular ligand, Epidermal Growth Factor (EGF). The others are Her2, -3 and -4.

EGFR (Her1) signaling, Wiki-Books image

The science behind drugs that interfere with Her2 receptors and signaling is nicely summarized in a recent, open-access Nature Reviews Clinical Oncology article. Herceptin binds a particular segment of Her2 on the outside of the cell; this leads to failed signaling on the inside, including cell division signals, and causes cell death by several mechanisms. Pertuzumab binds a distinct segment of Her2 in such a way that it can’t form a complex with the related Her3 molecule; this interaction is needed for Her2 to stimulate cell growth.

The FDA’s approval rests largely on results of the CLEOPATRA (Clinical Evaluation of Pertuzumab and Trastuzumab) study, published earlier this year in the NEJM. In that Phase III study, just over 800 patients were randomly assigned to receive a standard regimen – Herceptin in combination with Taxotere plus a placebo infusion – or the Herceptin-Taxotere combination plus Pertuzumab.

The patients who received Pertuzumab did better in terms of Progression Free Survival (PFS, 18.5 months vs. 12.4 months; a statistically significant difference). There is a trend, also, in terms of Overall Survival: at a median follow-up point of 19.3 months, there were more deaths in the placebo group. But a statistically significant difference was not reached. Toxicity was reported as “generally similar” in the two groups, but there was more diarrhea, dry skin and rashes among those who got Pertuzumab (Table 3). Heart problems, a known toxicity of the Herceptin-Taxotere regimen, were slightly less common with Pertuzumab. Hair loss, presumably from the chemotherapy part of the regimen, was common in both groups.

One curious thing I noticed, in re-reading the January report, is that although the median age for both patient groups was 54 years, the control patients ranged from 27 – 89 in age; those who got Pertuzumab ranged from 22 – 82 years. Although the younger “shift” of Pertuzumab-receiving patients relative to the controls is unlikely to affect the PFS, it’s odd to include an 89 year old patient on an experimental protocol involving infusions of two monoclonal antibodies along with chemo.

This is a super-costly regimen. Like Herceptin, and like the experimental antibody-drug conjugate, T-DM1, about which I wrote last week, Perjeta is manufactured by Genentech. As detailed by Andrew Pollack in the NY Times: the wholesale price for Perjeta will be $5,900 per month for a typical woman; Herceptin costs $4,500 per month. So we’re talking about a treatment in which the monoclonal antibodies alone cost over $10K per month. “A typical 18-month course of treatment would be more than $187,000,” he indicated. But if you add on the costs of the Taxotere, drugs like Benadryl and Decadron to minimize allergic reactions, anti-nausea meds, charges for the infusion and monitoring… it’ll be a lot more than that.

As the FDA notes in its press release, production of Perjeta is currently limited due to a technical issue at the Genentech manufacturing plant. Meanwhile, investigators, doctors and patients will have to sort out the relative value of this drug, on top of the others – including pills – for Her2+ disease.

My opinion is not quite formed on this new antibody. The FDA’s decision was based on results from one trial of 808 patients, half of whom didn’t get the experimental drug. Accrual began in 2008; its broad clinical effects, and long-term toxicities, can’t be established yet. It may be, ten years from now, that Perjeta will be used routinely in patients with other, Her2+ kinds of cancer. Or it may be a toxic bust.  How (and if) we’ll test and compare different doses of Perjeta and potential combinations with other drugs, small pills and traditional chemotherapies – which are many – is not clear. You could, for example, combine one or both of the antibodies with a drug like Lapatinib (Tykerb), that inhibits Her2-triggered growth signals inside the cell.

The problem is that oncologists, and facilities including academic centers where revenue is generated by giving drugs by infusion, now have a huge financial incentive to give the Herceptin-Perjeta-Taxotere regimen. This regimen is approved for first-line treatment of metastatic, Her2+ breast cancer; you don’t have to have “failed” another regimen, as was required for the EMILIA trial. As I understand this approval, an oncologist seeing a woman with recurrent or metastatic Her2+ breast cancer could, immediately, prescribe the 3-drug combination.

It’s impressive that the CLEOPATRA folks included an 89 year old patient in the study. But at some point, you have to wonder where we might draw lines. I’ve no answers on that.

All for now, maybe for the week,

ES

Related Posts:

EMILIA Trial: T-DM1 Appears Helpful in Women with Her2+ Metastatic Breast Cancer

This weekend the American Society of Clinical Oncology (ASCO), to which I belong, is holding its annual meeting in Chicago. Some of the biggest buzz has to do with a new breast cancer drug called T-DM1. ASCO just lifted the embargo on the relevant abstract.

The new agent is a hybrid of an old monoclonal antibody, Herceptin, that’s chemically attached to DM1, a traditional kind of chemotherapy. The chemo part, DM1 – also known as emtansine – is manufactured by ImmunoGen. It’s derived from maytansine, a compound that binds tubulin, a protein critical for microtubule formation in dividing cells. According to the NCI website, this chemical, which has antibiotic properties, was extracted from an Ethiopian plant, Maytenus serrata.

T-DM1 was designed by linking the DM1 compound to the trastuzumab (Herceptin) antibody. Trastuzumab is old news in breast cancer. It binds a signaling molecule, Her2, that’s expressed at high levels in approximately 1 in 5 breast tumors. The FDA approved Herceptin for use in patients with metastatic, Her2+ breast cancer in 1998 and, for some women with localized, lymph node positive disease, in 2006. In this new, hybrid drug, the antibody works like a tagged, toxic messenger. In effect, the antibody delivers and inserts the chemo into the malignant cell, where it causes cell death.

The new data, from the Genentech-sponsored EMILIA trial, were presented today:

The Phase III study evaluated 991 women with metastatic breast cancer. All participants had tumors with high levels of Her2 (confirmed in a central pathology lab, for the trial). All had disease that progressed despite treatment with Herceptin and, in most cases, other drugs too. After randomization, 978 women received either of two treatments: the experimental agent, T-DM1, every 3 weeks, by intravenous infusion, or a combination of two pills, “XL” – Xeloda (capecitabine) and Tykerb (lapatinib). Median follow-up was just over 1 year – not bad for a study of MBC, but not great, either.

The big news is this: Among the patients who got the experimental drug, T-DM1, the median time until disease progressed was 9.6 months; for those who took the XL pill combination, it was 6.4 months. This difference was statistically significant. Although a difference of 3 months may not sound like much – and isn’t – each regimen in the study held the women’s disease in check for over half a year.

It’s striking that T-DM1 was used as a single agent. Most chemotherapy drugs, like those for HIV, work best in combination; it could be that we’ll see more powerful results in a few years, once we learn how to optimally combine drugs for women with Her2+ breast cancer.

As far as overall survival, the initial results seem quite favorable. Among women on the study for 2 years, 65 percent were alive who received T-DM1; among those taking the XL regimen, 47 percent were alive at 2 years. (The statistical details for this comparison are not available; evidently it was of weak significance.) The problem is – if only a few patients were analyzed “so far out” on the survival curves, the difference observed between the two study arms might be random. Still, and independently of the comparison, survival of 65 percent at 2 years in this patient group is (sadly) impressive, especially if it comes by a single agent with comparatively few side effects.

The main T-DM1 toxicities were low platelets and abnormal liver function, which were, reportedly, reversible. The XL combination caused more toxicity, overall, including diarrhea, hand-foot syndrome and nausea. A much greater fraction of women on the XL arm needed dose reductions (53 and 27 percent, respectively, for Xeloda and lapatinib), as compared to the T-DM1 arm (16 percent had dose reductions due to toxicity). Evidently hair loss isn’t an issue for women who get T-DM1, which is nice.

My main, initial concerns are two:

First, the study, though randomized, is not “blinded,” and can’t be. It’s impossible for women who are getting an intravenous drug, and their doctors, not to know that they’re not on the pill study arm. Although there were independent evaluators of disease progression – a far more subjective measurement than overall survival – progression-free survival can be influenced by the doctors’ and patients’ knowing who’s getting T-DM1. That said, the initial, observed difference in overall survival – a clear, objective measurement – is impressive.

If these trial results, published in abstract form, pan out, and the quality of patients’ lives is maintained, that’d be helpful to as many as 1 in 5 women with MBC. It is plausible that an antibody like Herceptin that targets the tumor cells could, in fact, “deliver” the chemo effectively into the cancer cells with relatively low toxicity. And if the women are feeling better, which is hard to know from an abstract, great.

My second concern is how this drug will mesh with others now available and in the pipeline for patients with Her2+ disease. In a December, 2010 editorial in the JCO, two clinical investigators wrote: “the unique aspect of T-DM1 is clearly its high clinical activity by itself, without the need of concomitant additional systemic chemotherapy.” They’re right. The question – as considered by those authors – is how T-DM1 will be used in the context of expanding treatment options for women with Her2+ breast cancer. This is an expensive (price not yet known) monoclonal antibody-conjugate that’s necessarily given by infusion. Testing this drug against all the other current and up-and-coming alternatives, in varying combinations and doses, will be tricky. The trials alone will cost big bucks, to say nothing of toxicity and women’s lives.

These EMILIA results are promising for some women with MBC and, possibly, patients with other cancer forms in which Her2 is expressed. Unfortunately, it’s unlikely to help those women with breast cancer whose tumors lack Her2.

I’ll write soon about this new class of oncology drugs – antibodies conjugated to chemotherapies, as a group.

All for now,

ES

Related Posts:

A Picture of Periwinkle

Periwinkle plant – the source of Vincristine, a chemotherapy 

Dear Readers,

Your author has been busy writing other things, and revamping this site. Medical Lessons is, if nothing else, a work in progress.

For this week, I thought I’d simply share this image of periwinkle, Catharanthus roseus. From this plant comes an old chemotherapy drug called Vincristine (Oncovin). When I practiced, I used this agent to treat people with lymphoma, some forms of leukemia, Kaposi’s sarcoma and, rarely, patients with life-threatening cases of low platelets from an immune condition called ITP.

All for now,

ES

Related Posts:

10 Newly-Defined Molecular Types of Breast Cancer in Nature, and a Dream

Breast cancer is not one disease. We’ve understood this for decades. Still, and with few exceptions, knowledge of BC genetics – information on tumor-driving DNA mutations within the malignant cells – has been lacking. Most patients today get essentially primitive treatments: surgical hacking or carving, traditional chemotherapy and radiation. Some doctors consider hormone therapy as targeted, and thereby modern and less toxic. I don’t.

Until there’s a way to prevent BC, we need better ways to treat it. Which is why, upon reading the new paper in Nature on genetic patterns in breast cancer, I stayed up late, genuinely excited. As in thrilled, optimistic. The research defined 10 molecular BC subgroups. The distinct mutations and gene expression patterns confirm known targets and suggest new ones for future, better therapy.

The work is an exquisite application of science in medicine. Nature lists 31 individuals and one multinational research group, METABRIC (Molecular Taxonomy of Breast Cancer International Consortium), as authors. The two corresponding authors, Drs. Carlos Caldas and Samuel Aparicio, are based at the University of Cambridge, in England, and the University of British Columbia in Vancouver, Canada. Given the vastness of the supporting data, such a roster seems appropriate, even needed. Strangely, for all its worth, the paper didn’t get much press –

Just to keep this in perspective – we’re talking about human breast cancer. No mice.

The researchers examined nearly 2000 BC specimens for genetic aberrations, in 2 parts. First, they looked at inherited and acquired mutations in DNA extracted from tumors and, when available, from nearby, normal cells, in 997 cancer specimens – the “discovery set.” They checked to see how the genetic changes (SNPs, CNAs and/or CNVs) correlated with gene expression “landscapes” by probing for nearly 29,000 RNAs. They found that both inherited and acquired mutations can influence BC gene expression. Some effects of “driver” mutations take place on distant chromosomal elements, in what’s called a trans effect; others happen nearby (cis).

Next, they homed in on 45 regions of DNA associated with outlying gene expression. This led the investigators to discover putative cancer-causing mutations (accessible in supplementary Tables 22-24, available here). The list includes genes that someone like me, who’s been out of the research field for 10 years, might recall – PTEN, MYC, CDK3 and -4, and others. They discovered that 3 genes, PPP2R2A, MTAP and MAP2K4, are deleted in some BC cases and may be causative. In particular, they suggest that loss of PPP2R2A may contribute to luminal B breast cancer pathology. They find deletion of MAP2K4 in ER-positive tumors, indicative of a possible tumor suppressor function for this gene in BC.

Curtis, et al. in "Nature": April 2012

The investigators looked for genetic “hotspots.” They show these in Manhattan plots, among other cool graphs and hard figures, on abnormal gene copy numbers (CNAs) linked to big changes in gene expression. Of interest to tumor immunologists (and everyone else, surely!), they located two regions in the T-cell receptor genes that might relate to immune responses in BC. They delineated a part of chromosome 5 where deletions in basal-like tumors correspond to changes in cell cycle, DNA repair and cell death-related genes. And more –

Cluster Analysis (abstracted), Wikipedia

Heading toward the clinic, almost there…

They performed integrative cluster analyses and defined 10 distinct molecular BC subtypes. The new categories of the disease, memorably labeled “IntClust 1-10,” cross older pathology classifications (open-access: Supplementary Figure 31) and, it turns out, offer prognostic information based on long-term Kaplan-Meier analyses (Figure 5A in the paper; Supplementary Figs. 34 and 35). Of note here, and a bit scary for readers like me, is identification of an ER-positive group, “IntClust 2,” with 11q13/14 mutations. This BC genotype appears to carry a much worse prognosis than most ER-positive cases.
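For readers unfamiliar with the survival curves behind prognostic claims like these, the Kaplan-Meier estimator is simple at its core: at each death, the survival probability drops by the fraction of at-risk patients who died. Here is a minimal sketch in Python, using invented follow-up data (not the METABRIC cohort):

```python
# Minimal Kaplan-Meier estimator, illustrating the kind of survival
# analysis behind the paper's Figure 5A. The data below are invented
# for illustration only -- not from the METABRIC study.

def kaplan_meier(times, events):
    """times: follow-up in months; events: 1 = death, 0 = censored.
    Returns (time, survival probability) pairs at each death."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])  # sort by follow-up
    at_risk = n
    surv = 1.0
    curve = []
    for i in order:
        if events[i] == 1:  # a death: survival drops by the factor (1 - 1/n_at_risk)
            surv *= 1 - 1 / at_risk
            curve.append((times[i], round(surv, 3)))
        at_risk -= 1        # censored or dead, either way leaves the risk set
    return curve

# Hypothetical cohort: 6 patients, months of follow-up.
times = [5, 8, 12, 20, 30, 33]
events = [1, 0, 1, 1, 0, 1]
print(kaplan_meier(times, events))  # [(5, 0.833), (12, 0.625), (20, 0.417), (33, 0.0)]
```

In practice one would use a survival-analysis library, but the arithmetic above is all a Kaplan-Meier curve is: a product of conditional survival fractions, with censored patients counted in the denominator only while they remain under observation.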

Finally, in what’s tantamount to a 2nd report, the researchers probed a “validation set” of 995 additional BC specimens. In a partially-shortened method, they checked to see if the same 10 molecular subtypes would emerge upon a clustering analysis of paired DNA mutations with expression profiles. What’s more, the prognostic (survival) information held up in the confirmatory evaluation. Based on the mutations and gene expression patterns in each subgroup, there are implications for therapy. Wow!

I won’t review the features of each subtype here, for several reasons. These are preliminary findings, in the sense that it’s a new report, albeit a model of non-incremental observation and analysis; it’s early for patients – though not for investigators – to act on these findings. (Hopefully that won’t still be the case in 2015 or, preferably, sooner, for testing pertinent drugs in at least a subset of the subgroups identified.) Also, some of the methods these authors used emerged in the past decade, after I stopped doing research. It would be hard for most doctors to fully appreciate the nuances, strengths and weaknesses of the study.

Most readers can’t know how skeptical I was in the 1990s, when grant reviewers at the NCI seemed to believe that genetic info would be the cure-all for most and possibly all cancers. I don’t think that’s true, nor do most people involved with the Human Genome Project, anymore. The Cancer Genome Atlas and Project should help in this regard, but they’re young projects, larger in scope than this work, and don’t necessarily integrate DNA changes with gene expression as the investigators in this report do. What’s clear, now, is that some cancers do respond, dramatically, to drugs that target specific mutations. Recently-incurable malignancies, like advanced melanoma and GI stromal tumors, can now be treated with pills, often with terrific responses.

Last night I wondered if, in a few years, some breast cancers might be treated without surgery. If we could do a biopsy, check for the molecular subtype, and give patients the right BC tablets. Maybe we’d give just a tad of chemo, later, to “mop up” any remaining, residual or resistant cells. The primary chemotherapy might be a cocktail of drugs, by mouth. It might be like treating hepatitis C, or tuberculosis, or AIDS. (Not that any of those is so easy.) But there’d be no lost breasts, no reconstruction, no lymphedema. Can you imagine?

Even if just 1 or 2 of these investigators’ subgroups pans out and leads to effective, Gleevec-like drugs for breast cancer, that would be a dream. This can’t happen soon enough.

With innovative trial strategies like I-SPY, it’s possible that patients with particular molecular subgroups could be directed to trials of small drugs targeting some of the pathways implicated already. The pace of clinical trials has been impossibly slow in this disease. We (and by this I mean pharmaceutical companies, and oncologists who run clinical trials, and maybe some of the BC agencies with funds to spend) should be thinking fast, way ahead of this post –

And given that this is a blog, and not an ordinary medical publication or newspaper, I might say this: thank you, authors, for your work.

Related Posts:

A JAMA Press Briefing on CER, Helicopters and Time for Questions

This week the Journal of the American Medical Association, JAMA, held a media briefing on its current Comparative Effectiveness Research (CER) theme issue. The event took place at the National Press Club. A doctor, upon entering that building, might do a double-take waiting for the elevator, curious that the journalists occupy the 13th floor – a floor that’s absent in some hospitals.

CER is a big deal in medicine now. Dry as it is, it’s an investigative method that any doctor or health care maven, politician contemplating reform or, maybe, a patient would want to know about. The gist of CER is that it exploits large data sets – like SEER data or Medicare billing records – to examine outcomes in huge numbers of people who’ve had one or another intervention. An advantage of CER is that results are more likely generalizable, i.e. applicable in the “real world.” A long-standing criticism of randomized trials – held by most doctors, and the FDA, as the gold standard for establishing efficacy of a drug or procedure – is that patients in research studies tend to get better, or at least more meticulous, clinical care.

The JAMA program began with an intro by Dr. Phil Fontanarosa, a senior editor and author of an editorial on CER, followed by 4 presentations. The subjects were, on paper, shockingly dull: on carboplatin and paclitaxel w/ and w/out bevacizumab (Avastin) in older patients with lung cancer; on survival in adults who receive helicopter vs. ground-based EMS service after major trauma; a comparison of side effects and mortality after prostate cancer treatment by 1 of 3 forms of radiation (conformal, IMRT, or proton therapy); and – to cap it off – a presentation on PCORI‘s priorities and research agenda.

I learned from each speaker. They brought life to the topics! Seriously – the scene made me realize the value of meeting and hearing from the researchers, directly, in person. But, not to worry, on ML today we’ll skip over the oncologist’s detailed report to the second story:

Dr. Adil Haider, a trauma surgeon at Johns Hopkins, spoke on helicopter-mediated saves of trauma patients. Totally cool stuff; I’d rate his talk “exotic” – this was as far removed from the kind of work I did on molecular receptors in cancer cells as I’ve ever heard at a medical or journalism meeting of any sort –

Haider indulged the audience, and grabbed my attention, with a bit of history: HEMS, which stands for helicopter-EMS, goes back to the Korean War, as in M*A*S*H. The real-life surgeon-speaker at the JAMA news briefing played a music-replete video showing a person hit by a car and rescued by helicopter. While he and other trauma surgeons see value in HEMS, it’s costly and not necessarily better than GEMS (ground-EMS). Helicopters tend to draw top nurses, and they deliver patients to Level I or II trauma centers, he said, all of which may favor survival and other, better outcomes after serious injury. Accidents happen; previous studies have questioned the helicopters’ benefit.

The problem is, there’s been no solid randomized trial of HEMS vs. GEMS, nor could there be. (Who’d want to get the slow pick-up with a lesser crew to a local trauma center?) So these investigators did a retrospective cohort study to see what happens when trauma victims 15 years and older are delivered by HEMS or GEMS. They used data from the National Trauma Data Bank (NTDB), which includes nearly 62,000 patients transported by helicopter and over 161,000 patients transported by ground between 2007 and 2009. They selected patients with ISS (Injury severity scores) above 15. They used a “clustering” method to control for differences among trauma centers, and otherwise adjusted for degrees of injury and other confounding variables.

“It’s interesting,” Haider said. “If you look at the unadjusted mortality, the HEMS patients do worse.” But when you control for ISS, you get a 16% increase in odds of survival if you’re taken by helicopter to a Level I trauma center. He referred to Table 3 in the paper.  This, indeed, shows a big difference between the “raw” and adjusted data.
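The raw-versus-adjusted gap Haider described is a classic confounding effect: helicopters carry sicker patients, so their pooled death rate can look worse even when they do better within every injury-severity stratum. A toy Python sketch, with invented counts (not the NTDB data), makes the reversal concrete:

```python
# Toy illustration of confounding by injury severity (numbers invented,
# not from the NTDB study). HEMS looks worse in raw mortality yet does
# better than GEMS within each severity stratum.

counts = {
    # stratum: {transport mode: (deaths, total patients)}
    "severe":   {"HEMS": (240, 800), "GEMS": (80, 200)},
    "moderate": {"HEMS": (10, 200),  "GEMS": (80, 800)},
}

def mortality(deaths, total):
    return deaths / total

# Raw (unadjusted) rates, pooling both strata:
for mode in ("HEMS", "GEMS"):
    d = sum(counts[s][mode][0] for s in counts)
    n = sum(counts[s][mode][1] for s in counts)
    print(mode, "raw mortality:", mortality(d, n))   # HEMS 0.25, GEMS 0.16

# Stratified rates -- the comparison that matters:
for s in counts:
    rates = {m: mortality(*counts[s][m]) for m in counts[s]}
    print(s, rates)  # HEMS lower than GEMS in each stratum
```

With these made-up numbers, HEMS has 25% raw mortality against 16% for GEMS, yet within both the severe and moderate strata the helicopter patients fare better. Adjusting for severity, as the study authors did, undoes exactly this kind of distortion.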

In a supplemental video provided by JAMA (starting at 60 seconds in):

“When you first look, across the board, you’ll see that actually more patients transported by helicopter, in terms of just the raw percentages, actually die.” – Dr. Samuel Galvagno (DO, PhD), the study’s first author.

The video immediately cuts to the senior author, Haider, who continues:

But when you do an analysis controlling for how severely these patients were injured, the chance of survival improves by about 30 percent, for those patients who are brought by helicopter…

Big picture:

What’s clear is that how investigators adjust or manipulate or clarify or frame or present data – you choose the verb – yields differing results. This capability doesn’t just pertain to data on trauma and helicopters. In many Big Data situations, researchers can cut information to impress whatever point they choose.

The report offers a case study of how researchers can use elaborate statistical methods to support a clinical decision in a way that few doctors who read the results are in a position to grasp, to know if the conclusions are valid, or not.

A concluding note –

I appreciated the time allotted for Q&A after the first 3 research presentations. There’s been recent, legitimate questioning of the value of medical conferences. This week’s session, sponsored by JAMA, reinforced to me the value of meeting study authors in person and having the opportunity to question them about their findings. This is crucial; I know it from my prior experience in cancer research, when I didn’t ask enough hard questions of some colleagues in public. For the future, at places like TEDMED – where I’ve heard there was no attempt to allow for Q&A – the audience’s concerns can reveal problems in theories and published data and, constructively, help researchers fill in those gaps, ultimately bringing better-quality information, from any sort of study, to light.

Related Posts:

Review: Dr. Eric Topol’s Creative Destruction of Medicine

Before reading Dr. Eric Topol’s Creative Destruction of Medicine, I wasn’t sure what to expect. Topol, a cardiologist with a background in genetics, was a prominent figure in the take-down of Vioxx. He was at the Cleveland Clinic back then, around 2004, and has since moved to direct the Translational Science Institute at Scripps. He was a few years ahead of me in academic medicine and, by almost any parameter, far more successful.

He’s a TED speaker, I knew. From the TED bio: “Eric Topol uses the study of genomics to propel game-changing medical research.” His work sounds exciting! I first read of the new book in a recent, tech-minded interview in Wired. It seemed like it might be all theory – no touch, little reality. With this lead-in, I wasn’t quite prepared to like this book, although I was interested.

Topol’s book is fantastic. I couldn’t put it down because it’s chock-full of good, critical ideas about clinical medicine. The title, “Creative Destruction,” is a reference to Joseph Schumpeter’s theory of radical transformation through innovation. In Chapter 1, he outlines the “Digital Landscape” and explains, simply, how a convergence of advances in technology over the past 40 years – like personal computers, cell phones, the Internet, connectivity and instant access to data – have set the stage for a dramatic shift in medical culture and practice. Doctors, for some reason, have been slow to adapt digital technology to health care, but this is changing, fast.

One theme that emerges through the book is the capacity for technology – by “knowing” and processing so much real-time information about each person’s condition – to inform more effective, individualized treatments. This comes up in his critique of evidence-based medicine and later, when he considers progress in molecular oncology and again, in a section on the pitfalls of old-fashioned, large clinical trials involving many (hundreds or thousands of) patients unlikely to benefit.

Topol’s comfortable writing about the intersection of science and medicine as few physicians are. He describes several clinical episodes, like when the first patient with a stroke received TPA, a clot-dissolving agent. The point is, he’s been there, at some of the world’s best hospitals, where innovative treatments have been applied. But he’s seen first-hand disappointment, too. This grounds the work. There’s a long chapter on “Biology” which offers, among other insights, a realistic critique of genetic information that many doctors don’t understand. He identifies value in hypothesis-free research, and considers high-throughput screening.

I should mention two provocative details, among many. One appears in Chapter 3, on “empowered” medical consumers. At the Cleveland Clinic Foundation, where he’d worked and served on the Board of Governors, Topol observed busy, otherwise-occupied trustees who contributed significant time and money to the hospital. One reason they did so, he says, was so they might have access to the best doctors “in case anyone in my family or I get sick” (p. 50). He cites flaws in popular hospital ranking systems, like U.S. News & World Report, and offers tips for how to find a good doctor for a particular condition, like checking publications in Google Scholar and looking for senior authors of highly-cited papers. He writes:

“The heterogeneity of the quality of care is not adequately appreciated, and all too often consumers accept the convenient, easy alternative…If this involves a physician or surgeon who does procedures or operations, it is essential to ask for the exact number of procedures performed per year and cumulatively over his or her career…” (pp. 52-53).

The point here is that physicians are not machines. Some are more capable than others, and the quality of care received depends on the doctor’s training, experience and other human qualities.

Another gem, in Chapter 11, pertains to the “science of individuality.” We’re at a threshold, Topol says, of eliminating ignorance in medicine. For doctors and informed patients who happen upon this review: idiopathic, essential and cryptogenic diseases will be gone. Instead, we’ll have conditions defined molecularly or, even if not understood, rooted in the concept of N=1. He writes:

…a new body of data that can be derived from any individual, both at baseline and after an intervention… This opportunity leverages the immense molecular biological, physiologic, and anatomic data that can be determined for any individual, and reinforces that the ultimate goal of an intervention is to have a markedly favorable impact on each n-of-1, rather than the current model, which emphasizes population medicine with the relatively small chance that any individual may derive benefit.

What he’s saying is that the more quickly and inexpensively we can gather and process details about a patient’s medical condition, the more cleverly we can apply treatments designed to help, even in the absence of large trials.

I love this idea.

Related Posts:

New Article on Mammography Spawns False Hope That Breast Cancer is Not a Dangerous Disease

This week’s stir comes from the Annals of Internal Medicine. In a new analysis, researchers applied complex models to cancer screening and BC case data in Norway. They estimated how many women found to have invasive breast cancer are “overdiagnosed.” I cannot fathom why the editors of the Annals gave a platform to such a convoluted and misleading medical report as Overdiagnosis of Invasive Breast Cancer Due to Mammography Screening: Results From the Norwegian Screening Program. But they did.

Here are a few of my concerns:

1. None of the four authors is an oncologist.

2. The researchers use mathematical arguments so complex that Einstein would certainly, 100%, without a doubt, take issue with their model and proof.

3. “Overdiagnosis” is not defined in any clinical sense (such as the finding of a tumor in a woman that’s benign and doesn’t need treatment). Here, from the paper’s abstract:

The percentage of overdiagnosis was calculated by accounting for the expected decrease in incidence following cessation of screening after age 69 years (approach 1) and by comparing incidence in the current screening group with incidence among women 2 and 5 years older in the historical screening groups, accounting for average lead time (approach 2).

No joke: this is how “overdiagnosis” – the primary outcome of the study – is explained. After reading the paper in its entirety three times, I cannot find any better definition of overdiagnosis within the full text. Based on these manipulations, the researchers “find” an estimated rate of overdiagnosis attributable to mammography of 18-25% by one method (model/approach 1) or 15-20% by the other (model/approach 2).
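For what it’s worth, the arithmetic behind an “overdiagnosis percentage” of this sort reduces to comparing observed diagnoses against a modeled counterfactual. A schematic sketch, with invented numbers (not the Norwegian data), shows how much of the result rides on the unobservable “expected” figure:

```python
# Schematic of the overdiagnosis arithmetic (invented numbers, not the
# Norwegian data). The "expected" count is a modeled counterfactual --
# what incidence would have been without screening -- not an observed
# clinical quantity, which is exactly the objection raised above.

observed_with_screening = 2500     # invasive cancers diagnosed, screened group
expected_without_screening = 2000  # modeled counterfactual, same population

excess = observed_with_screening - expected_without_screening
overdiagnosis_pct = 100 * excess / observed_with_screening
print(f"estimated overdiagnosis: {overdiagnosis_pct:.0f}%")  # 20%
```

Shift the modeled denominator or the counterfactual by a few percent and the headline rate moves accordingly; no patient’s tumor is ever directly observed to be harmless.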

4. The study includes a significant cohort of women ages 70-79. Indolent tumors are more common in older women who, by virtue of their age, are also more likely to die of other causes. The analysis does not include women younger than 50 in its constructs.

5. My biggest concern is how this paper was broadcast – which, first of all, was too much.

Bloomberg News takes away this simple message in a headline:  “Breast Cancer Screening May Overdiagnose by Up to 25%.” Or, from the Boston Globe’s Daily Dose, “Mammograms may overdiagnose up to 1 in 4 breast cancers, Harvard study finds.” (Did they all get the same memo?)

The Washington Post’s Checkup offers some details: “Through complicated calculations, the researchers determined that between 15 percent and 25 percent of those diagnoses fell into the category of overdiagnosis — the detection of tumors that would have done no harm had they gone undetected.” But then the Post blows it with this commentary, a few paragraphs down:

The problem is that nobody yet knows how to predict which cancers can be left untreated and which will prove fatal if untreated. So for now the only viable approach is to regard all breast cancers as potentially fatal and treat them with surgery, radiation, chemotherapy or a combination of approaches, none of them pleasant options…

This is simply not true. Any pathologist or oncologist or breast cancer surgeon worth his or her education could tell you that not all breast cancers are the same. There’s a spectrum of disease. Some cases warrant more treatment than others, and some merit distinct forms of treatment, like Herceptin, or estrogen modulators, surgery alone…Very few forms of invasive breast cancer warrant no treatment unless the patient is so old that she is likely to die first of another condition, or the patient prefers to die of the disease. When and if they do arise, slow-growing subtypes should be evident to any well-trained, modern pathologist.

“Mammograms Spot Cancers That May Not Be Dangerous,” said WebMD, yesterday. This is feel-good news, and largely wishful.

A dangerous message, IMO.

Addendum, 4/15/12: The abstract of the Annals paper includes a definition of “overdiagnosis” that is absent in the body of the report: “…defined as the percentage of cases of cancer that would not have become clinically apparent in a woman’s lifetime without screening…” I acknowledge this is helpful, in understanding the study’s purpose. But this explanation does not clarify the study’s findings, which are abstract. The paper does not count or otherwise directly measure any clinical cases in which women’s tumors either didn’t grow or waned. It’s just a calculation. – ES

Related Posts:

What Does it Mean if Primary Care Doctors Get the Answers Wrong About Screening Stats?

Last week the Annals of Internal Medicine published a new report on how doctors (don’t) understand cancer screening stats. This unusual paper reveals that some primary care physicians – a majority of those who completed a survey – don’t really get the numbers on cancer incidence, 5-year survival and mortality.

An accompanying editorial by Dr. Virginia Moyer, a Professor of Pediatrics and current Chair of the USPSTF, drives two messages in her title, What We Don’t Know Can Hurt Our Patients: Physician Innumeracy and Overuse of Screening Tests. Dr. Moyer is right, to a point. Because if doctors who counsel patients on screening don’t know what they’re speaking of, they may provide misinformation and cause harm. But she overstates the study’s implications by emphasizing the “overuse of screening tests.”

The report shows, plainly and painfully, that too many doctors are confused and even ignorant of some statistical concepts. Nothing more, nothing less. The new findings have no bearing on whether or not cancer screening is cost-effective or life-saving.

What the study does suggest is that med school math requirements should be upped and made rigorous, counter to the trend. And that we should do a better job educating students and reminding doctors about relevant concepts, including lead-time bias, overdiagnosis and – as highlighted in two valuable blogs just yesterday, NPR Shots and Reporting on Health Antidote – the Number Needed to Treat, or NNT.
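Of the concepts listed, the NNT is the easiest to show concretely: it is simply the reciprocal of the absolute risk reduction. A minimal Python sketch, with invented risk figures:

```python
# Number Needed to Treat (NNT), one of the concepts the surveyed
# physicians struggled with. NNT = 1 / absolute risk reduction.
# The rates below are invented for illustration.

def nnt(risk_control, risk_treated):
    """How many patients must be treated (or screened) to prevent one event."""
    arr = risk_control - risk_treated  # absolute risk reduction
    if arr <= 0:
        raise ValueError("no absolute benefit; NNT undefined")
    return 1 / arr

# E.g., if an intervention cuts 10-year mortality from 0.5% to 0.4%,
# the relative risk falls an impressive-sounding 20%, but:
print(round(nnt(0.005, 0.004)))  # 1000 people treated per death averted
```

The same pair of numbers can be framed as a “20% reduction in deaths” or as “1,000 treated to save one life,” which is precisely why innumeracy about these statistics matters for counseling patients.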

The Annals paper has yielded at least two unfortunate outcomes. One, which there’s no way to get around, is the clear admission of doctors’ confusion. In the long term, this may be a good thing, like admitting a medical error and then having QA improve as a consequence. But meanwhile some doctors at their office desks and lecterns don’t realize what they don’t know, and there’s no clear remedy in sight.

Dr. Moyer, in her editorial, writes that medical journal editors should carefully monitor reports to ensure that results aren’t likely misinterpreted. She says, in just one half-sentence, that medical educators should improve teaching on this topic. And then she directs the task of stats-ed to media and journalists, who, she advises, might follow the lead of the “watchdog” HealthNewsReview. I don’t see that as a solution, although I agree that journalists should know as much as possible about statistics and limits of data about which they report.

The main problem elucidated in this article is a failure in medical education. The cat’s out of the bag now. The WSJ Health Blog covered the story. Most doctors are baffled, says Fox News. On its home page, the Dartmouth Institute for Health Policy & Clinical Practice links to a Reuters article that’s landed on the NIH/NLM-sponsored MedlinePlus (accessed 3/15/12). This embarrassment further compromises individuals’ confidence in the doctors they would, and sometimes must, rely on.

We lie, we cheat, we steal, we are confused… What else can doctors do wrong?

The second, and I think unnecessary, problematic outcome of this report is that it’s been used to argue against cancer screening. In the editorial Dr. Moyer indulges an ill-supported statement:

…several analyses have demonstrated that the vast majority of women with screen-detected breast cancer have not had their lives saved by screening, but rather have been diagnosed early with no change in outcome or have been overdiagnosed.

The problem of overdiagnosis, which comes up a lot in the paper, is over-emphasized, at least as it relates to breast cancer, colon cancer and some other tumors. I have never seen a case of vanishing invasive breast cancer. In younger women, low-grade invasive tumors are relatively rare. So overdiagnosis isn’t applicable in BC, at least for women who are not elderly.

In the second paragraph Dr. Moyer outlines, in an unusual mode for the Annals, a cabal-like screening lobby:

 …powerful nonmedical forces may also lead to enthusiasm for screening, including financial interests from companies that make tests or testing equipment or sell products to treat the conditions diagnosed and more subtle financial pressures from the clinicians whose daily work is to diagnose or treat a condition. If fewer people are diagnosed with a disease, advocacy groups stand to lose contributions and academics who study the disease may lose funding. Politicians may wish to appear responsive to powerful special interests…

While she may be right that there are some influential and self-serving interests and corporations that push aggressively, maybe too aggressively, for cancer screening, it may also be that some forms of cancer screening are indeed life-saving tools that should be valued by our society. I think, also, that she goes too far in insinuating that major advocacy groups push for screening because they stand to lose funding.

I’ve met many cancer agency workers, some founders, some full-time, paid and volunteer helpers – with varied priorities and goals – and I honestly believe that each and every one of those individuals hopes that the problem of cancer killing so many non-elderly individuals in our society will go away. It’s beyond reason to suggest there’s a hidden agenda at any of the major cancer agencies to “keep cancer going.” There are plenty of other worthy causes to which they might give their time and other resources, like education, to name one.

Which leads me back to the original paper, on doctors’ limited knowledge –

As I read the original paper the first time, I considered what would happen if you tested 412 practicing primary care physicians about hepatitis C – screening, strains, and whether there’s a benefit to early detection and treatment of that common and sometimes pathologic virus – or about the use of aspirin in adults with high blood pressure and other risk factors for heart disease, or about the risks and benefits of drugs that lower cholesterol.

It seems highly unlikely that physicians’ uncertainty is limited to conceptual aspects of cancer screening stats. Knowing that, you’d have to wonder why the authors did this research, and why the editorial pushes so hard the message of over-screening.

Related Posts:

Counterfeit Drugs, A New Concern for Patients

This week the FDA issued an alert about fake Avastin. The real drug is a Genentech-manufactured monoclonal antibody prescribed to some cancer patients. Counterfeit vials were sold and distributed to more than a dozen offices and medical treatment facilities in the U.S. This event, which seems to have affected a small number of patients and practices, should sound a big alarm.

Even the most empowered patient – one who’s read up on his drug regimen, engaged with his physician about what and how much he wants to receive, visited several doctors for second opinions and gone online to discuss treatment options with other patients and possibly some experts – can’t know, for sure, exactly what’s in the bag attached to his IV pole.

Counterfeit Avastin (images from FDA)

Scary because patients are so vulnerable –

The problem is this. If you’re sick and really need care, at some point you have to trust that what you’re getting, whether it’s a dose of an antibiotic, or a hit of radiation to a bone met, or a blood thinner, is what it’s supposed to be. If vials are mislabeled, or machines wrongly calibrated, the error might be impossible to detect until side effects appear. If you’re getting a hoax of a cancer drug in combination with other chemo, and it might or might not work in your case, and its side effects – typically affecting just a small percent of recipients – are in a black box, it could be really hard to know you’re not getting the right stuff.

What this means for providers is that your patients are counting on you to dot the i’s. Be careful. Know your sources. Triple-check everything.

The bigger picture – and this falls into a pattern of a profit motive interfering with good care – is that pharmacists and doctors and nurses need time to do their work carefully. They need to get rest, so that they’re not working robotically, and so that they don’t assume that someone else has already checked what they haven’t. And whoever is buying medications or supplies for a medical center, let’s hope they’re not cutting shady deals.

This issue may be broader than is known, now. The ongoing chemo shortage might make a practice “hungry” for drugs. And with so many uninsured, some patients may seek treatments from less-than-reputable infusion givers. The black market, presumably, includes drugs besides Avastin.

If I were receiving an infusion today, whether chemo, anesthesia, or an antibody for Crohn's disease, I'd worry a little bit extra. I mean, who will check every single vial and label and box? Think of the average hospital patient, and how much stuff they receive in an ordinary day – including IV fluids that might be contaminated with bacteria.

It’s scary because of the loss of control. This circumstance might be inherent to being a patient – in being a true patient and not a “consumer.”

Cyberchondria Rising – What is the Term’s Meaning and History?

Yesterday the AMA news informed me that cyberchondria is on the rise. So it’s a good moment to consider the term’s meaning and history.

Cyberchondria is an unfounded health concern that develops upon searching the Internet for information about symptoms or a disease. A cyberchondriac is someone who surfs the Web about a medical problem and worries about it unduly.

Through Wikipedia, I located what might be the first reference to cyberchondria in a medical journal: a 2003 article in the Journal of Neurology, Neurosurgery, and Psychiatry. A section on the new diagnosis starts like this: “Although not yet in the Oxford English Dictionary, the word ‘cyberchondria’ has been coined to describe the excessive use of internet health sites to fuel health anxiety.” That academic report links back to a 2001 story in the Independent, “Are you a Cyberchondriac?”

Two Microsoft researchers, Ryen White and Eric Horvitz, authored a “classic” paper: Cyberchondria: Studies of the Escalation of Medical Concerns in Web Search. This academic paper, published in 2009, reviews the history of cyberchondria and results of a survey on Internet searches and anxiety.

Interesting that the term – coined in a newspaper story and evaluated largely by IT experts – has entered the medical lexicon. I wonder how the American Psychiatric Association will handle cyberchondria in the upcoming DSM-5.

NEJM Reports on 2 New Drugs for Hepatitis C

Last week’s NEJM delivered an intriguing, imperfect article on a new approach to treating hepatitis C (HCV). The paper’s careful title, Preliminary Study of Two Antiviral Agents for Hepatitis C Genotype 1, seems right. The analysis, with 17 authors listed, traces the response of 21 people with HCV who got two new antiviral agents, with or without older drugs, in a clinical trial sponsored by Bristol-Myers Squibb.

The 21 study participants all had chronic infection by HCV genotype 1, a strain that’s common in North America and relatively resistant to standard treatment. All subjects were between 18 and 70 years old, with a measurable level of HCV RNA in the blood, no evidence of cirrhosis, and no response to prior HCV treatment (according to criteria detailed in the paper). In the trial, 11 patients received a combination regimen of daclatasvir (60 mg once daily, by mouth) and asunaprevir (600 mg, twice daily by mouth) alone; the other 10 patients took the experimental drugs along with 2 older meds for HCV – peginterferon (Pegasys, an injectable drug by Roche) and ribavirin (Copegus, a pill, by Roche).

The main finding is that the 10 patients assigned to take 4 drugs all did strikingly well in terms of reducing detectable HCV in their blood over the course of 24 weeks. There was a dramatic response, also, in 4 of the 11 patients assigned to the new drugs only. An accompanying editorial highlighted the work as a “Watershed Moment in the Treatment of Hepatitis C.” The medical significance is that the investigators demonstrated proof of principle: by “hitting” a resistant HCV strain with multiple antiviral drugs simultaneously, they could reduce it to undetectable levels.

The first question you have to ask about this report is why the NEJM – the most selective of medical journals – would publish findings of an exploratory analysis of two new pills paired with two older drugs for HCV. The best answer, probably, is that the virus infects some 4 million people in the U.S. and approximately 180 million people worldwide, according to the study authors. HCV can cause liver damage, cirrhosis, liver cancer (which is usually fatal) and, occasionally, blood disorders.

The new drugs derive from some interesting science; this, maybe, is also a factor in why the article was published in the NEJM. Daclatasvir (BMS-790052) blocks a viral protein, NS5A, that’s essential for HCV replication. The second new drug, asunaprevir (BMS-650032), inhibits a viral protease, NS3.

I have several concerns about this report. One is that the researchers screened 56 patients for possible enrollment but entered only 21 on the trial; according to a supplementary Figure 1, 35 potential subjects (over half) didn’t meet eligibility criteria. This disparity makes any former researcher wonder about bias in selecting patients for enrollment. If you’re a pharmaceutical company and want to show a new drug or combo is safe, you’re going to pick patients for a trial who are least likely to experience or display significant toxicity.

Toxicity seems like it could be problematic. Diarrhea, fatigue and headaches were common among the study subjects. Worrisome is that 6 of the 21 patients (28.6% of those on the trial) had liver problems, manifested by at least one enzyme (the ALT) rising to over 3 times the upper limit of normal.

Further complicating the picture, there’s no indication of how these new drugs mesh with the two drugs approved for HCV in 2011: Victrelis (boceprevir) and Incivek (telaprevir).

Given all these limitations, you might wonder about BMS’s influence at the Journal or, more likely, on the manuscript’s peer reviewers. The 17 study authors, and the editorialist, separately, disclose a host of industry ties.

What I’m thinking, as much as I’m critical of this research work, is that this is probably the way of the future – smaller, pharma-funded studies of targeted new drugs in complicated combinations. Many will be authored by academics with ties to industry, if not put forth directly by company-employed researchers. These quick-and-promising studies in select patient groups will be routine. And while advocates push for rapid publication of new clinical research in patients with resistant, disabling diseases, it’ll be hard for physicians and patients to interpret these kinds of data.

So these particular findings may turn out to be true and life-saving, or not. The bigger concern is this: it would be helpful if the journals took a really tough stance on full disclosure of authors’ and editors’ ties to industry. As Merrill Goozner has emphasized, the Physician Payment Sunshine Act – a small component of the 2010 health care reform legislation – has important implications for academic medicine and the reporting of clinical research studies.

What is the Disease Control Rate in Oncology?

Last week I came upon a new term in the cancer literature: the Disease Control Rate. The DCR refers to the total proportion of patients who demonstrate a response to treatment.

In oncology terms: The DCR is the sum of complete responses (CR) + partial responses (PR) + stable disease (SD).

Another way of explaining it: Some people with cancer have measurable, growing tumors. For example, a man might have a sarcoma with multiple metastases in the lung that are evidently progressing. If the patient starts a new treatment and the lung mets don’t shrink but stop getting bigger, that might be considered a stabilizing effect from the therapy, and his response would be included in the DCR.
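The arithmetic behind the DCR is simple enough to sketch in a few lines. The numbers below are entirely hypothetical, for illustration only:

```python
def disease_control_rate(cr, pr, sd, total):
    """Disease Control Rate: the fraction of evaluable patients with a
    complete response (CR), partial response (PR), or stable disease (SD)."""
    if total <= 0:
        raise ValueError("total must be positive")
    return (cr + pr + sd) / total

# Hypothetical trial of 40 evaluable patients:
# 5 complete responses, 10 partial responses, 9 with stable disease.
rate = disease_control_rate(cr=5, pr=10, sd=9, total=40)
print(f"DCR = {rate:.0%}")  # prints "DCR = 60%"
```

Note that the DCR differs from the overall response rate (CR + PR), which would exclude the patients with stable disease.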

Breast Cancer Stats: Notes from the 2012 ACS Report, and a Key Question

Earlier this month, the ACS released its annual report on Cancer Facts and Figures. The document, based largely on analyses of SEER data from the NCI, estimates that approximately 229,000 adults in the U.S. will receive a diagnosis of invasive breast cancer (BC) this year. The disease affects just over 2,000 men annually; 99% of cases arise in women. Non-invasive, aka in situ or Stage 0 BC, including DCIS, will be found in approximately 63,000 individuals.

The slightly encouraging news is that BC mortality continues to decline. This year, the number of expected deaths from BC is just under 40,000. From the ACS document: “Steady declines in breast cancer mortality among women since 1990 have been attributed to a combination of early detection and improvements in treatment.”

Survival data, from the report:

For all women diagnosed with BC, the 5-year relative survival rate has risen from 63% in the 1960s to 90% today. For women of all stages combined, relative survival is 82% at 10 years and 77% at 15 years. Traditional staging still matters: for women with localized BC (that has not spread to lymph nodes or elsewhere outside of the breast), 5-year relative survival is 99%. For women with lymph node involvement, 5-year relative survival is 84%.

For those with metastatic disease, 5-year relative survival is 23%. The report cautions: these “stats don’t reflect recent advances in detection and treatment. For example, 15-year relative survival is based on patients diagnosed as early as 1990.”
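For readers unfamiliar with the term: relative survival compares observed survival in the patient group with the expected survival of a demographically matched group in the general population, which is why it can approach (or, in principle, reach) 100%. A minimal sketch, with made-up numbers rather than figures from the ACS report:

```python
def relative_survival(observed, expected):
    """Relative survival: observed survival proportion in the patient
    group divided by the expected survival proportion of a matched
    general-population group over the same interval."""
    if not 0 < expected <= 1:
        raise ValueError("expected survival must be in (0, 1]")
    return observed / expected

# Hypothetical: 85% of patients alive at 5 years, versus 94% expected
# among matched people without the disease.
print(f"{relative_survival(0.85, 0.94):.1%}")  # prints "90.4%"
```

This is also why relative survival can exceed the corresponding observed survival figure: deaths from unrelated causes are, in effect, factored out.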

Since 1990, we’ve seen testing and widespread use of (no longer) new drugs like Herceptin, taxane-type chemotherapies, aromatase inhibitors and other meds in women with MBC. In addition, it’s possible that better palliative care and supportive strategies, along with more effective treatments for infectious and other complications, may have extended survival.

What we’ve got to ask, and about which data are remarkably elusive, is this: What is the median survival for women with metastatic BC (MBC) in 2012?

Your author has spoken with several leading, national authorities on the subject, and no one has provided a clear answer. The reason for this informational hole is that SEER data includes the incidence of new cases at each stage, and mortality from the disease, but does not include numbers on stage conversion – when a woman who had early-stage disease relapses with Stage IV (MBC). There’s astonishingly little current data on how long women live, on average, after relapsing.

Twenty years ago, oncology fellows learned that the median survival of women with MBC was around 3 years. That is pretty much still what doctors tell patients, but there’s a sense that the picture is no longer so bleak. Much of what we know about survival of women with MBC comes from clinical trials of patients with particular subtypes (e.g. Her2+ or Her2-negative disease). That information, on subtypes and responsiveness to particular drugs, is crucial. But we also need to know the big picture, i.e. exactly – give or take a few thousand women – how many are alive now with MBC?

This information might inform research funding and the planning of medical and social services, besides furthering our understanding of the course of the illness and the extent of the problem. And if survival has indeed improved, that measurement, straightforward as it should be, might offer hope to those living with the disease today.

Weight Loss Strategies – What Should Doctors Say to Patients?

Yesterday’s Times offered two distinct perspectives on weight loss. One, a detailed feature on gastric surgery by Anemona Hartocollis, describes the plight of a young obese woman who opts for Lap-Band surgery. In this procedure, surgeons wrap a constricting band of silicone around the stomach so that patients will feel full upon eating less food than they might otherwise. Allergan, the company that manufactures the device, acknowledges the procedure’s complications on its website.

The other, a discussion of resolutions and will-power by John Tierney, considers strategies for sticking to diets, exercise regimens and other good intentions for the new year. Within this piece lies a distracting story of an obese (375 pound) hedge fund manager whose gastric band failed to keep his appetite in check. When he landed a project in Las Vegas and feared regaining weight, he aimed high – to lose 100 pounds, outfitted his hotel suite with a gym, and hired a personal trainer to stay nearby and keep him on track in terms of meals and exercise. This costly “outsourcing” of will-power is, obviously, not an option for most people.

Tierney does offer some reasonable suggestions – like setting realistic goals, weighing yourself daily, Tweeting your weight, logging into a weight-loss website, not freaking out if you blow your diet one day, etc.

Both articles are well-worth reading.

But here’s the thing – how do doctors fit into this picture? In the last few years that I was practicing hematology, I saw a few patients who had B12 deficiency after gastric bypass surgery. These patients turned out to have multiple problems after their stomachs were cut so they’d eat less food. For some it was helpful; I saw individuals who lost over 150 pounds. Still, the surgery was huge and risky. I can’t fathom having recommended it to a patient whom I cared for, unless perhaps I’d personally witnessed her struggling to lose weight for over, say, 8-10 years.

Because most people, if inspired or starved, can lose weight. This may sound cruel, but what if the doctors recommending the procedure don’t have sufficient confidence in their patients?

The Lap-Band is sold as a safer alternative, but upon reading the story (an anecdote, but a telling one), you have to wonder what patients’ expectations of the procedure are, and how well they understand the likely risks and benefits. Who are the doctors who tell them about the procedures, and what are their ties to industry (besides the obvious link of surgeons who perform the surgery and recommend it)?

Like patients with cancer, patients with obesity may feel desperate. But unlike cancer, obesity is almost always a function of choices we make, and for which I think we have to hold people responsible.

Doctors, maybe, should expect more of their patients. “Yes, you can lose 30 pounds over the next 2 years,” one might say. And they might talk about strategies, Tierney-style or otherwise, based on the patient’s preferences and personality. “Come into my office once each month for a weigh-in” might be very effective in persuading patients to shed pounds. A technician could do the monthly measurement in the office or medical home, and the doctor or nurse might follow-up with an encouraging email. Imagine that!

So why don’t more general practitioners, including pediatricians, offer this sort of weight-loss approach? Is the strategy so simple that doctors don’t find it interesting? Or is it not sufficiently profitable for the office or medical center?

No answers, just thoughts upon reading, for today –

A Note on ‘Trial by Twitter’ and Peer Review in 2012

Nature just published a feature: Trial by Twitter. The piece considers the predicament of researchers who may find themselves ill-prepared to deal with a barrage of unsolicited and immediate on-line “reviews” of their published work. The author of the Nature News piece, science journalist A. Mandavilli, does a great job covering the pros and cons of Twitter “comments” on strengths and weaknesses of studies from the perspective of researchers whose work has been published by major journals.

She writes:

Papers are increasingly being taken apart in blogs, on Twitter and on other social media within hours rather than years, and in public, rather than at small conferences or in private conversation.

What I’d add is this:

Openness isn’t just about criticism. It can be a positive factor in bringing to light the work of small-lab researchers whose findings contradict dogma or conflict with heavily-financed work by leaders in a field. Through twitter and blogs, non-mainstream threads of data can gain attention, traction and, with time and merit, grant support.

Scientists who publish in major journals should be able to handle the flak. If their work is correct, it’ll stand through open peer review.

—-
