Shudda, Cudda, Wudda: Reevaluating the Treatment Revolution After the Fall
“Six Years of Lost Data”
David Barr revisits the changes to the federal when-to-start recommendations announced earlier this year and urges us to consider both the roots and the ramifications of this extraordinary development. Though he acknowledges the predicament of the Monday-morning quarterback, his words are frank and hard-hitting. “How many of those people have suffered only from the effects of treatment, not from HIV infection?” he asks. And “where are the research infrastructures to study a chronic disease?”
Where is the outcry from AIDS advocates following the recent change in the U.S. government’s adult HIV treatment guidelines? The guidelines panel sponsored by the National Institutes of Health (NIH) changed its recommendation from starting antiretroviral therapy at T cell counts below 500 to waiting until T cell counts fall below 350. New British guidelines have gone even further, recommending that treatment not start until T cell counts fall below 200.
These changes are based on two central premises: 1) that drug resistance and treatment side effects are leaving individuals with fewer treatment options over time, and 2) that data from multiple (mostly European) cohort studies indicate no significant difference in treatment response among people starting at 500, 350, or 200 CD4 T cells.
Perhaps jaded AIDS advocates think the guidelines change is not significant. But this change affects the largest group of HIV-infected people in the U.S., the untreated, and has implications for treatment strategies worldwide. The guidelines are still an important tool for many, if not most, physicians, treatment educators, and patients. While the magnitude of this change may undermine their faith in the guidelines process, what it should really do is call into question the priorities of the research process.
The change indicates that the original guidelines were wrong. How many people with HIV were hurt because they followed those original guidelines? How many otherwise asymptomatic patients are now multi-drug resistant? How many suffer from irreversible side effects? How many of those people have suffered only from the effects of treatment, not from HIV infection? How many HIV-infected individuals wasted the very real benefit that HAART offers because they followed guidelines based solely on expert opinion about drugs and diagnostic technologies that were untested in clinical practice?
Perhaps it is unfair to criticize these mistakes with the benefit of hindsight. Certainly drug resistance and the possibility of long-term side effects were recognized as potential problems in 1996. Yes, one can excuse the mistake of prematurely recommending early use of AZT monotherapy in 1989, but by 1996, everyone should have known better.
In 1996, it was clear that there were new, powerful drugs that could radically alter the course of HIV infection. Recommendations could have been made for immediate use of these new therapies for those who needed them most and for whom we actually had efficacy data — people with AIDS. Meanwhile, researchers could have charted a course of research to better understand the long-term and most effective use of these drugs over time, including when to initiate treatment, what combinations to take, how to address treatment failure, and how to recognize and potentially treat side effects.
Instead, the drugs were recommended to the broadest range of patients, making further study more difficult, perhaps impossible. The guidelines panel members who advocated for the most aggressive treatment approach in 1996 were the same people responsible for developing a research agenda to learn how to use these therapies effectively over the long term. Unfortunately, they were far less aggressive in developing their research agenda than they were in crafting their treatment recommendations.
The clinical trials networks should have geared up in 1996 to undertake studies of the long-term effects of these treatments. Large, randomized strategy studies should have been designed. New cohort studies should have been created. But the research establishment’s attempts to address these issues were meager. Instead, the clinical trials networks continued to put their primary emphasis on new drug development and on smaller, shorter studies based on viral load and T cell changes. Such studies still have an important role in HIV research, but in 1996, long-term clinical effectiveness research should have become a priority, particularly within government-sponsored research programs. So far, people with HIV have lost six years of important data collection.
In 1999, the National Institute of Allergy and Infectious Diseases (NIAID) restructured its clinical trials networks. Here was a crucial opportunity to develop a research agenda to understand the strategic use of HAART over the long term. NIAID could have used this funding process to create a clinical trials infrastructure expressly designed for such research. That opportunity was wasted.
The AIDS Clinical Trials Group (ACTG), with a proven record of conducting state-of-the-art studies based primarily on surrogate endpoints, was fully funded. The ACTG was created at a time when there were few treatments for HIV and when drug development was the only real priority. That infrastructure is still important, but it has not met the challenge of developing long-term clinical effectiveness research, nor is it designed to do so. Although its ALERT protocol will follow patients from study to study, ALERT is neither a controlled study testing different strategies of antiretroviral use over long periods in large groups of patients nor a comprehensive cohort for studying the effects of HAART on HIV infection across heterogeneous health care delivery settings.
The Community Programs for Clinical Research on AIDS (CPCRA) seems genuinely interested in questions of long-term clinical effectiveness, as evidenced by the FIRST, SMART, and long-term monitoring studies currently under way. These studies will address important questions about which drug combinations to start with and what to do when they fail. The CPCRA, however, is too small a network to fully enroll large, long-term studies, and its record of long-term follow-up is not good. It could have joined forces with the Veterans Administration (VA) to create a research network with the largest pool of HIV-infected individuals ever assembled, but the opportunity was squandered in squabbles over turf. Rather than figure out how to make effective use of the largest provider of HIV care in the nation, NIAID criticized the VA for an admittedly overambitious research agenda and refused to fund it. Meanwhile, NIH still has no way of studying what is happening to the more than 25,000 people with HIV receiving care through the VA. Thus, NIAID failed to develop the clinical trials network needed to understand the effects of the treatment strategy it was recommending in its guidelines.
Think of the important information learned from the Multicenter AIDS Cohort Study (MACS) over the years. Why wasn’t a new and more diverse cohort started in 1996 to examine the long-term effects of HAART? If HAART turned HIV into a chronic (if not manageable) illness, then where are the research infrastructures to study a chronic disease? The change in the guidelines marks yet another turning point in the roller coaster of HIV treatment. Perhaps it is time someone came up with a way to chart what is clearly going to be a long ride.
Thanks to RITA editor Thomas Gegeny and the Center for AIDS in Houston (www.centerforaids.org) for permission to reprint David’s thoughts in this issue of TAGline.