This week the Journal of the American Medical Association (JAMA) held a media briefing on its current Comparative Effectiveness Research (CER) theme issue. The event took place at the National Press Club. A doctor entering that building might do a double-take while waiting for the elevator, noting that the journalists occupy the 13th floor – a floor that's absent in some hospitals.
CER is a big deal in medicine now. Dry as it is, it's an investigative method that any doctor or health care maven, politician contemplating reform or, maybe, a patient would want to know about. The gist of CER is that it exploits large data sets – like SEER data or Medicare billing records – to examine outcomes in huge numbers of people who've had one or another intervention. An advantage of CER is that its results are more likely generalizable, i.e. applicable in the "real world." A long-standing criticism of randomized trials – which most doctors, and the FDA, hold as the gold standard for establishing the efficacy of a drug or procedure – is that patients in research studies tend to get better, or at least more meticulous, clinical care.
The JAMA program began with an intro by Dr. Phil Fontanarosa, a senior editor and author of an editorial on CER, followed by 4 presentations. The subjects were, on paper, shockingly dull: on carboplatin and paclitaxel with and without bevacizumab (Avastin) in older patients with lung cancer; on survival in adults who receive helicopter vs. ground-based EMS service after major trauma; a comparison of side effects and mortality after prostate cancer treatment by 1 of 3 forms of radiation (conformal, IMRT, or proton therapy); and – to cap it off – a presentation on PCORI's priorities and research agenda.
I learned from each speaker. They brought life to the topics! Seriously, the scene made me realize the value of meeting and hearing from the researchers directly, in person. But for today we'll skip over the oncologist's detailed report and go straight to the second story:
Dr. Adil Haider, a trauma surgeon at Johns Hopkins, spoke on helicopter-mediated saves of trauma patients. Totally cool stuff; I'd rate his talk "exotic" – as far removed from the kind of work I did on molecular receptors in cancer cells as anything I've heard at a medical or journalism meeting of any sort.
Haider indulged the audience, and grabbed my attention, with a bit of history: HEMS, which stands for helicopter-EMS, goes back to the Korean War, like in M*A*S*H. The real-life surgeon-speaker at the JAMA news briefing played a music-replete video showing a person hit by a car and rescued by helicopter. While he and other trauma surgeons see value in HEMS, it's costly and not necessarily better than GEMS (ground-EMS). Helicopters tend to draw top nurses, and they deliver patients to Level I or II trauma centers, he said, all of which may favor survival and other, better outcomes after serious injury. But helicopter accidents happen, too, and previous studies have questioned the helicopters' benefit.
The problem is, there's been no solid randomized trial of HEMS vs. GEMS, nor could there be. (Who'd want to be assigned the slow pick-up, with a lesser crew, to a local trauma center?) So these investigators did a retrospective cohort study to see what happens when trauma victims 15 years and older are delivered by HEMS or GEMS. They used data from the National Trauma Data Bank (NTDB), which includes nearly 62,000 patients transported by helicopter and over 161,000 patients transported by ground between 2007 and 2009. They selected patients with an ISS (Injury Severity Score) above 15. They used a "clustering" method to control for differences among trauma centers, and otherwise adjusted for degree of injury and other confounding variables.
“It’s interesting,” Haider said. “If you look at the unadjusted mortality, the HEMS patients do worse.” But when you control for ISS, you get a 16% increase in odds of survival if you’re taken by helicopter to a Level I trauma center. He referred to Table 3 in the paper. This, indeed, shows a big difference between the “raw” and adjusted data.
In a supplemental video provided by JAMA (starting at 60 seconds in):
“When you first look, across the board, you’ll see that actually more patients transported by helicopter, in terms of just the raw percentages, actually die.” – Dr. Samuel Galvagno (DO, PhD), the study’s first author.
The video immediately cuts to the senior author, Haider, who continues:
“But when you do an analysis controlling for how severely these patients were injured, the chance of survival improves by about 30 percent, for those patients who are brought by helicopter…”
What’s clear is that how investigators adjust or manipulate or clarify or frame or present data – you choose the verb – yields differing results. This capability doesn’t just pertain to data on trauma and helicopters. In many Big Data situations, researchers can cut the information to support whatever point they choose.
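To make the raw-vs-adjusted flip concrete, here's a minimal sketch with entirely invented numbers (not the NTDB figures from the study). The idea: if severely injured patients are over-represented in the helicopter group, crude mortality can look worse for HEMS even when, within each severity stratum, HEMS patients fare better. A stratified (Mantel–Haenszel) odds ratio recovers the severity-adjusted picture.

```python
# Illustrative only: invented counts, NOT the NTDB data from the JAMA study.
# Each stratum is (deaths_hems, survivors_hems, deaths_gems, survivors_gems).
strata = {
    "severe injury (high ISS)": (400, 400, 120, 80),   # HEMS carries most severe cases
    "milder injury (low ISS)":  (10, 190, 60, 740),
}

def crude_odds_ratio(strata):
    # Collapse all strata into one 2x2 table, ignoring injury severity.
    a = sum(s[0] for s in strata.values())  # HEMS deaths
    b = sum(s[1] for s in strata.values())  # HEMS survivors
    c = sum(s[2] for s in strata.values())  # GEMS deaths
    d = sum(s[3] for s in strata.values())  # GEMS survivors
    return (a * d) / (b * c)

def mantel_haenszel_or(strata):
    # Severity-adjusted odds ratio: combine per-stratum 2x2 tables,
    # weighting each by its stratum size (Mantel-Haenszel estimator).
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata.values())
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata.values())
    return num / den

print(f"crude OR for death, HEMS vs GEMS:   {crude_odds_ratio(strata):.2f}")
print(f"adjusted (M-H) OR, HEMS vs GEMS:    {mantel_haenszel_or(strata):.2f}")
# With these made-up numbers the crude OR is above 1 (helicopter looks
# deadlier), while the severity-adjusted OR is below 1 (helicopter looks
# protective) - the same direction of flip the authors describe.
```

The actual study used regression models with clustering by trauma center, not a simple stratified table, but the confounding mechanism this sketch illustrates – injury severity driving both transport mode and mortality – is the same one Haider described.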
The report offers a case study in how researchers can use elaborate statistical methods to support a clinical decision – in a way that few doctors who read the results are in a position to evaluate, to know whether the conclusions are valid or not.
A concluding note –
I appreciated the time allotted for Q&A after the first 3 research presentations. There’s been recent, legitimate questioning of the value of medical conferences. This week’s session, sponsored by JAMA, reinforced for me the value of meeting study authors in person and having the opportunity to question them about their findings. This is crucial; I know it from my prior experience in cancer research, when I didn’t ask enough hard questions of some colleagues in public. Looking ahead, at places like TEDMED – where, I’ve heard, there was no attempt to allow for Q&A – the audience’s concerns can reveal problems in theories and published data and, constructively, help researchers fill in those gaps, ultimately bringing better-quality information, from any sort of study, to light.