Saturday, January 23, 2010

Confounding statistics

Almost everyone who receives a cancer diagnosis -- or has a loved one who’s been diagnosed -- sooner or later goes online to learn about the disease. Quite often, we scare the pants off ourselves.

Speaking for myself, I’m not always looking for information. I’m often looking for reassurance. And that can be hard to find. It can be tough to tease out hopeful trends from scary statistics.

The first otolaryngologist we consulted couldn’t identify David’s tumor before it was biopsied. That doctor had several ideas, though, of what it might be. A friend who accompanied us to the consult -- a cancer survivor herself -- advised, “Now don’t go home and start googling all of this.”

The doctor said he didn't have a problem with patients looking up stuff on the Internet. He just asked that we call him if we saw something that worried us “because so much doesn't apply to every case.”

Excellent advice. I should’ve taken it as a warning.

Do you have a science or medical background to help you keep results of clinical trials in perspective? Do you have a good foundation in statistics that allows you to interpret the numbers? If not, reading cancer studies likely will make you pull out your hair. Of course, if you’re the patient, your hair will fall out anyway, but I still advise that you limit the reading if you can.

Unfortunately, I can’t. I’ve googled “rhabdomyosarcoma” every way possible, linking it with “symptoms,” “prognosis,” “treatment,” “five-year survival,” “recurrence,” every chemotherapy drug known to humankind, and even “death.” I think after some of my online sessions, I feel sicker than David does after a dose of irinotecan.

By now I’ve looked at hundreds of studies. And pretty much all I’ve learned is that I don’t understand what I’m reading. Undeterred, I make long lists of what I assume to be relevant questions for David Loeb, our oncologist. He is so patient. Time after time he’s come back explaining that the study doesn’t say what I think it says and noting why the results don’t apply to David’s case anyway.

A top biostatistician and physician who eats clinical trials for breakfast spoke to our Medicine in Action class last year. Over a 2½-hour period, he delivered a PowerPoint tutorial on how to evaluate scientific studies.

He spelled out many reasons why the results of a particular study might be suspect or even completely invalid. Maybe there were too few participants, making the study cohort too small. Maybe the control group wasn’t really comparable to the patient group. Maybe there weren’t enough women in the study. Maybe the study wasn’t randomized -- the most authoritative type -- meaning patients weren’t randomly assigned to different arms of the study. Or worse, maybe a drug company funded the research and the findings support its newly developed medication a little too much.

Those of us in the class listened attentively and took pages of notes. By the time he'd finished, our brains were numb.

At the close of our initial consult with Dr. Loeb and four other doctors in May 2008, he said, “We cure better than two thirds of patients with intermediate-risk rhabdo,” which was David’s group. Dr. Loeb looked very pleased. I remember thinking, “Why is he smiling? I’d feel much better if the number was three thirds.”

A year later, when it was confirmed that David had refractory disease -- meaning the tumor was still active despite standard treatment -- the two-thirds number slipped. Worried, I started reading studies again, looking for new percentages. A particularly disturbing statistic caused me to e-mail Dr. Loeb. He patiently explained (again) that I’d misinterpreted the number and wrote,

“Never EVER look at survival statistics again. EVER. Statistics are meaningless for the individual.”

Have I stopped? Not really, although I don’t let the numbers alarm me as much. And nothing will stop me from searching for that one study that includes a patient identical to David, who was completely cured and went on to live another 100 years.

Meanwhile, we're putting our energies into finding a treatment that will work for David and making sure he gets to live the life he wants until then. Our goal is to not let statistics confound us. Our hope is to confound the statistics.

© 2010 by Lorin D. Buck

Tuesday, January 19, 2010

The ‘practice’ of medicine

I like trusting the medical community. I like trusting that four years in med school, three or more years as an intern and resident working 24-hour shifts, several years as a fellow engaged in a specialty, and day-to-day experience with all kinds of patients make doctors knowledgeable, skillful and fully capable of curing disease.

Of course, that scenario works in a lot of cases. But not always, especially when it comes to cancer.

My science-medical writing course at Hopkins, Medicine in Action -- where physicians from cardiologists to oncologists spoke about the challenges of providing quality health care -- taught me many things. Chief among them:

1.  Avoid hospital admissions if you can.
2.  Avoid all unnecessary surgery.
3.  Remember that medical care is as much an art as a science.

I learned that while it's important to trust your physicians, you can't expect them to know everything. Medicine is still full of mystery.

One of the first jobs a doctor must do when he or she meets a new patient, I learned, is put together a reliable narrative, or story, for that individual. What brought this patient to the hospital or clinic? How does the patient describe his or her symptoms? What is the level of pain, if any? What medications is he or she taking? What can the doctor observe from visible physical symptoms and the patient’s state of mind? Does the patient have a history of illness? What did he or she neglect to mention? All of this requires an ability to listen attentively and ask the right questions.

What do vital signs, blood work, x-rays, scans, biopsies, lumbar punctures and other evaluative tests show? Are the results conclusive? Do they align with what the patient is saying? After the doctor has what seems to be a consistent narrative, he or she can begin treating the patient.

Obviously, some illnesses and injuries are fairly straightforward: an ear infection, a broken bone, an abscess. Others are trickier to assess. Even when the doctor seems to have all the pieces, the problem can still be difficult to diagnose. And even when the doctor knows what the illness or disease is, it can be exceptionally tough to treat.

It took two biopsies and four scans over six weeks to diagnose David’s cancer. We knew he had a 6-cm mass in his sinus cavity; the MRI had told us that much. Still, we went back and forth several times as various otolaryngologists, oncologists and pathologists suggested first that the tumor was benign, then that it was malignant, then that it was benign again. Ultimately, it was malignant.

Finding a treatment that will lead to a cure, even with an initially favorable prognosis, has proved equally challenging.

As much as researchers have learned about cancer, there’s much more to learn. What makes malignant cells start growing in the first place? Why does cancer recur after all indications show it’s been eliminated from the body? When you have two patients with the same disease receiving the same treatment, how come one responds and the other doesn’t?

The scientific answer to many questions about cancer is “we don’t know.”

“We aren’t sure” is the answer I get to some of my questions about why David’s tumor isn’t responding to standard treatment, why half of it is dead but the other half isn’t, and even why he’s experienced few debilitating side effects from the powerful drugs in his system.

I’m OK with that because I know David’s doctors are giving it everything they’ve got and are talking to other experts as they puzzle out his “story.” Meanwhile, it's an exercise in trial and error.

One of our instructors, himself a physician, shared a timeworn saying that’s a favorite among medical practitioners: “There’s a reason it’s called ‘practicing’ medicine. Doctors keep practicing until they get it right.”

© 2010 by Lorin D. Buck