Speaking for myself, I’m not always looking for information. I’m often looking for reassurance. And that can be hard to find. It can be tough to tease out hopeful trends from scary statistics.
The first otolaryngologist we consulted couldn’t identify David’s tumor before it was biopsied. That doctor had several ideas, though, of what it might be. A friend who accompanied us to the consult -- a cancer survivor herself -- advised, “Now don’t go home and start googling all of this.”
The doctor said he didn't have a problem with patients looking up stuff on the Internet. He just asked that we call him if we saw something that worried us “because so much doesn't apply to every case.”
Excellent advice. I should’ve taken it as a warning.
Do you have a science or medical background to help you keep the results of clinical trials in perspective? Do you have a good foundation in statistics that allows you to interpret the numbers? If not, reading cancer studies will likely make you pull out your hair. Of course, if you’re the patient, your hair will fall out anyway, but I still advise that you limit the reading if you can.
Unfortunately, I can’t. I’ve googled “rhabdomyosarcoma” every way possible, linking it with “symptoms,” “prognosis,” “treatment,” “five-year survival,” “recurrence,” every chemotherapy drug known to humankind, and even “death.” After some of my online sessions, I think I feel sicker than David does after a dose of irinotecan.
By now I’ve looked at hundreds of studies. And pretty much all I’ve learned is that I don’t understand what I’m reading. Undeterred, I make long lists of what I assume to be relevant questions for David Loeb, our oncologist. He is so patient. Time after time he’s come back explaining that the study doesn’t say what I think it says and noting why the results don’t apply to David’s case anyway.
A top biostatistician and physician who eats clinical trials for breakfast spoke to our Medicine in Action class last year. Over a 2½-hour period, he delivered a PowerPoint tutorial on how to evaluate scientific studies.
He spelled out many reasons why the results of a particular study might be suspect or even completely invalid. Maybe there were too few participants, making the cohort too small to draw reliable conclusions. Maybe the control group wasn’t really comparable to the patient group. Maybe there weren’t enough women in the study. Maybe the study wasn’t randomized -- that is, patients weren’t randomly assigned to different arms -- and randomized trials are the most authoritative type. Or worse, maybe a drug company funded the research and the findings support its newly developed medication a little too much.
Those of us in the class listened attentively and took pages of notes. By the time he'd finished, our brains were numb.
At the close of our initial consult with Dr. Loeb and four other doctors in May 2008, he said, “We cure better than two thirds of patients with intermediate-risk rhabdo,” which was David’s group. Dr. Loeb looked very pleased. I remember thinking, “Why is he smiling? I’d feel much better if the number were three thirds.”
A year later, when it was confirmed that David had refractory disease -- meaning the tumor was still active despite standard treatment -- the two-thirds number slipped. Worried, I started reading studies again, looking for new percentages. A particularly disturbing statistic caused me to e-mail Dr. Loeb. He patiently explained (again) that I’d misinterpreted the number and wrote,
“Never EVER look at survival statistics again. EVER. Statistics are meaningless for the individual.”
Have I stopped? Not really, although I don’t let the numbers alarm me as much. And nothing will stop me from searching for that one study that includes a patient identical to David, who was completely cured and went on to live another 100 years.
Meanwhile, we're putting our energies into finding a treatment that will work for David and making sure he gets to live the life he wants until then. Our goal is to not let statistics confound us. Our hope is to confound the statistics.
© 2010 by Lorin D. Buck