TORONTO - Not all patient drug trials published in even the most prestigious of medical journals can be taken as gospel, say researchers, who have found a high proportion of "spin and bias" in the reporting of results.
Researchers at Princess Margaret Cancer Centre reviewed 164 breast cancer trials. They found that in studies that reported no real benefit of treatment, a large proportion focused on less important outcomes to give a more positive spin to results.
Of 92 trials that had negative primary outcomes, about 60 per cent emphasized secondary measures, "often trying to make the study look positive, though it really was not," said medical oncologist Dr. Ian Tannock, who led the study published this week in the Annals of Oncology.
"Sometimes studies that are basically negative studies are a little bit dressed up to look as though they may be positive," Tannock said Wednesday. "It's like the politicians. Trying to make things look better than they are."
The Toronto study focused on what's known as a trial's "abstract," the summary of how the research was done and its major results and side-effects. The key result is the "endpoint" — in this case, whether the drug in question did what it was supposed to do in patients or not.
The 164 studies appeared between 1995 and 2011, many of them in premier medical publications like the Journal of Clinical Oncology, Lancet Oncology and the New England Journal of Medicine. All were randomized controlled trials of at least 250 patients each, which typically tested a new drug or compared a new drug to an existing medication.
Tannock said some published reports of breast cancer treatment trials also downplayed or under-reported the incidence of toxic side-effects experienced by patients taking a drug being investigated.
In two-thirds of the reports, there was bias in the way adverse effects of the treatment were reported, with more serious side-effects poorly reported, the authors say. That was particularly true for trials that showed a significant benefit of a treatment.
"If there was an improvement in survival or time to the progression of the disease, they often omitted any statement about toxicity, even though in many cases there was a substantial increase in toxicity," Tannock said.
"And, of course, that's important because if you have a new treatment that on average improves survival for patients with metastatic (spreading) cancer by, say, three months and the treatments are fairly equivalent in terms of side-effects, that's an important advance.
"But if (a drug) improves survival but with a major increase in toxicity, I would say that was a very questionable advance."
The study also tried to determine whether misreporting was more likely if a trial was funded by the pharmaceutical company whose drug was being tested. But Tannock said "we didn't find that, at least not at a statistically significant level."
However, commercial sponsorship of trials isn't always disclosed in reports of results, he said.
Pharmaceutical companies looking to sell new under-patent drugs — often with higher price tags than older off-patent generic agents — and researchers hoping for a successful trial outcome may both be at fault, he said.
"There is a subtle pressure on academic investigators to publish studies that will be noticed."
Tannock said it's critical that summaries of published clinical trials accurately report how efficacious a drug is in treating breast cancer, in this case, and lay out the severity of side-effects patients might have to endure with the treatment.
Often, the summary is all that busy doctors have time to read carefully, he said.
"Some physicians may be persuaded to use a new or different treatment than the standard because they have read papers that suggest that this is a better treatment when it may not really be a better treatment," he said.