What Does It Mean If a Vaccine Is ‘Successful’?

When representatives from the drug company Pfizer say that they could know as soon as the end of October if their Covid-19 vaccine works, here’s what they mean: If their trial, involving perhaps as many as 44,000 people, pops just 32 of them with mild Covid-19 symptoms and a positive test—and if 26 of those people got a placebo instead of the vaccine—that, potentially, is it. According to the guidelines laid out by the Food and Drug Administration, that would be an “effective” vaccine: 50 percent efficacy with a statistical “confidence interval” that puts brackets around a range from 30 percent to 70 percent. At that point, per Pfizer’s protocol, the company could stop the trial. Technically, that vaccine would be successful.
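For readers who want to check the arithmetic, here is a minimal sketch of the point estimate behind those numbers. It assumes 1:1 randomization and equal follow-up time in both arms (simplifying assumptions; the protocol's actual interim analysis is Bayesian and more involved):

```python
def vaccine_efficacy(cases_vaccine: int, cases_placebo: int) -> float:
    """Point estimate: VE = 1 - (attack rate in vaccine arm / attack rate in placebo arm).

    With 1:1 randomization and equal follow-up, the attack-rate ratio
    reduces to a ratio of raw case counts (an assumption for this sketch).
    """
    return 1.0 - cases_vaccine / cases_placebo

# The interim scenario described above: 32 cases total, 26 of them on placebo.
ve = vaccine_efficacy(cases_vaccine=32 - 26, cases_placebo=26)
print(f"Estimated efficacy: {ve:.1%}")  # roughly 77%, comfortably above the 50% bar
```

The point estimate looks high precisely because the case count is so small, which is why the confidence interval around it is so wide.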

Now to be fair, nobody, least of all those selfsame Pfizer representatives, is explicitly claiming that will happen—or that if it does, Pfizer would take those numbers to the FDA and ask to start giving people shots. “The protocol only specifies that the study would stop in the case of futility, and does not outline a binding obligation to stop the study if efficacy is declared,” a Pfizer spokesperson told me by email. Translation: They have wiggle room to keep going. On the other hand, they could ask for an emergency use authorization, which the FDA and President Donald Trump seem to be angling for—and which could, for various ethical and practical reasons, then become a roadblock in front of all the other trials in progress. It’s hard to tell!

Which is a problem. Now that several pharmaceutical companies have released detailed plans for how they’re testing their Covid-19 vaccine candidates, researchers are asking questions about those protocols. Even if the trials can reliably say whether a particular vaccine works—for various definitions of “works”—it’s less clear that they will be able to tell which one works better, and for whom. No one is yet testing vaccines head-to-head. The goal here hasn’t changed: to get one or more vaccines that protect lots of different kinds of people against Covid-19. At issue is how the many candidate vaccine trials are designed, what the trials will actually show, and how the vaccines compare to each other.

Big vaccine trials all depend in part on defining “end points,” the signs of infection or illness that the researchers say they’re going to count. Basically, the setup is: You give tens of thousands of people the vaccine and a few thousand other people a placebo, and you see who gets to those predetermined end points. If more people who got the placebo do—by a mathematically predetermined proportion—you got yourself a vaccine.
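That “mathematically predetermined proportion” is, at heart, a hypothesis test. A toy version—assuming 1:1 randomization, so that with a useless vaccine each case is equally likely to land in either arm—asks how surprising a placebo-heavy split like 26 of 32 would be by chance alone:

```python
from math import comb

def prob_at_least(placebo_cases: int, total_cases: int) -> float:
    """P(at least this many cases land in the placebo arm) if each case is a
    fair coin flip between arms, i.e., if the vaccine does nothing."""
    tail = sum(comb(total_cases, k) for k in range(placebo_cases, total_cases + 1))
    return tail / 2 ** total_cases

p = prob_at_least(26, 32)
print(f"Chance of a 26-of-32 placebo split with a useless vaccine: {p:.5f}")
```

Under these toy assumptions the probability is well under one in a thousand, which is the sense in which such a lopsided split counts as evidence the vaccine is doing something. The real protocols use more sophisticated statistics, but the logic is the same.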

The tricky bit is, what really constitutes an end point? Obviously a big one is “infection with the virus SARS-CoV-2.” But after that, reasonable minds could disagree. You could also choose “correlates of immunity,” like antibodies found in a blood test. Or you could use symptoms, as these trials do. That’s common practice. But does it matter if someone gets a little sick, with mild illness like a cough or muscle aches, versus a lot sick, with severe illness that requires a ventilator or an intensive care unit? Pfizer and the other companies with trials underway are using mild symptoms and a positive Covid-19 test as their primary end points, and severe illness as a secondary end point, something for later statistical analysis.

But incidence of mild cases might not be the most useful thing to count. If you’re looking for vaccines meant to eventually reach billions of people, maybe you actually want to first ensure they beat back the most severe symptoms, not the mild ones. “What you’d like, in this very small number of events, going to the planetary population, is to have the most confidence you possibly can. That would be suppressing the worst events, sickness that requires a hospitalization and anything worse than that,” says Eric Topol, a professor of molecular medicine at the Scripps Research Institute who has been watchdogging the trial protocols. Mild, coldlike symptoms, he says, “are not very good signals of efficacy. And my understanding is there was tremendous internal debate about that when these protocols were being discussed, but I think they made a bad decision.”
