In the last entry of Prism’s Living History series, I wrote about the influence that Bill Wimsatt’s work on scientific heuristics had on my thinking—and by extension, the influence it continues to have on what we do at Prism. At the end of that entry, I alluded to the next chapter in this story involving an application of my meta-heuristic framework to an antibiotic drug development program in collaboration with researchers at the U.S. Centers for Disease Control and Prevention (CDC).
However, I had forgotten that I already described a fair bit about this case in an earlier Prism Academy post! So rather than re-tread the same ground, for this entry, I decided to do something a little bit different: I reached out to my CDC collaborator and co-author from that time, Chad Heilig, who is now an Associate Director for Data Science at CDC. Chad graciously agreed to let me interview him about our circa 2010 collaboration and his perspective on its importance. What follows is an excerpt of this conversation. (Disclaimer: Chad's views are his own and do not necessarily reflect the official position of CDC.)
Spencer Phillips Hey (SPH): Thanks so much, Chad, for making the time to speak with me. I’d like to start with your account of the events that led to our 2013 publication of the meta-heuristic visualization model for research programs (a.k.a., the AERO graph).
Chad Heilig (CH): Sure. I think you and I met around the Fall of 2009 (at Western University in London, Ontario). I was then heavily involved with the Tuberculosis Clinical Trials Consortium (TBTC), and some of the questions top of mind for me stemmed from the fact that the CDC had just completed two phase 2B trials involving moxifloxacin that showed little or no benefit from adding the drug. But at the same time, there were two or three other phase 2 trials in the tuberculosis world that had also used moxifloxacin and found it to be beneficial.
So there were these questions: Why wasn’t the signal with moxifloxacin as strong as we hoped it would be? (Particularly since the results from animal experiments had been very promising.) Why did the CDC-sponsored trials show different results from the other trials?
I know at that time, you were still in the midst of writing your dissertation, which was dealing with concepts like heuristics and robustness. As we started discussing the moxifloxacin case together, you brought some thinking and assumptions to clinical trials that I tried to help you reframe and refine in much the same way that I think you tried to help me reframe and refine my thinking about philosophy of science.
And so somewhere in there, I think, is where you were working on a way to represent this set of really disparate information. Because you had this meta-heuristic schema, you were able to see that each of the studies in a research program can tell us a little bit of something about how moxifloxacin might be expected to act against tuberculosis, but none of them was definitive. And the AERO graph idea is to provide a more systematic way of framing that state of knowledge.
SPH: Yes, that’s exactly right! And you may remember that in the original AERO graph paper, although we are discussing a phase 3 trial with moxifloxacin as one possible next step, there was, in fact, already a phase 3 trial underway—the REMoxTB trial. As it turns out, that trial was “negative” in the sense that a moxifloxacin substitution into the anti-TB regimen did not shorten the treatment time. So if we updated our visualization for this research program, we would see this mix of red and green bubbles in phase 2 (corresponding to the “negative” and “positive” trials, respectively) leading to a big red bubble in phase 3. I’ve argued that the temptation may be to think that REMoxTB was a mistake, but that’s actually not the right interpretation. In fact, there are even ethical reasons why some proportion of phase 3 trials must be negative! At the same time, from speaking with researchers in industry, I know that a negative phase 3 trial (particularly if it’s the first phase 3 with a new treatment or regimen) can be deflating for the sponsoring company. But I’d love to hear your take on that. What do you make of the REMoxTB outcome? Or how would you think about this kind of scenario where, despite quite mixed evidence at phase 2, the program pushes forward to a disappointing result in phase 3?
CH: Well, there are some considerations that I think lead the tuberculosis research world to see themselves differently than other parts of medical research. They see themselves closer to orphan drug research than to cutting edge. For example, Bayer (moxifloxacin’s manufacturer) and Sanofi (manufacturer of rifapentine, another anti-TB drug) were both strong and high-integrity partners for the tuberculosis research world. And by “high-integrity,” I mean that I don't think they ever pressured researchers to try to find something favorable to their product. I mean, certainly they would be excited, but they made a commitment early on favoring access to effective TB medicine over primary financial benefit.
So what happens when you see a result like REMoxTB? That can be taken up in a different way than what happens if you have disappointing results in a more commercially driven sector of medical research. And I think that's a good thing if these kinds of broader, “cultural” factors (e.g., Sanofi’s Access to Medicines program) are brought to bear on the ultimate decision-making about what next steps to take. In the moxifloxacin case, there was a continued drive in the research community to move forward, and there was strong support from the sponsor. But the sponsor kind of stood back and let the investigators (with substantial statistical input from me) design what they believed was the best study.
More generally, the “answer” about what to do next is not straightforwardly a matter of statistics. As the AERO graph suggests, an answer will involve extracting useful pieces of information from a body of disparate undertakings. But going from there to conceive of something that could be called a “scientific strategy”… that’s an even harder problem.
SPH: Let’s follow that last point about a “scientific strategy”. It seems to me that one way of understanding the AERO graph is as a systematic way of building a scientific map of the evidence landscape. And if we can also understand research programs as an exploration of this landscape, then it seems reckless to go exploring without a map. Yet, even after we showed your colleagues at the CDC how they could apply the AERO graph approach, which (arguably) should be a critical tool for understanding and communicating their scientific strategy, there just didn’t seem to be the appetite for it. Do you have any thoughts about why that is?
CH: I’m speculating heavily here, but I think TBTC would say that they were intentional and strategic, even if they couldn't articulate it as straightforwardly as the AERO graph could. Could they have been more systematic about it? For sure.
But I think one of the tensions here is that right thinking does not necessarily lead to right action. And if I think that I already know what the visualization is saying, then why would my action be any different?
Another tension is that sometimes the thinking in a field can become too obvious. Like, so obvious that it's hard to convince me that you're saying something I don't already know. And because I already think I know it, I don't have an incentive to reflect on how you reframed it for me.
Of course, that is one of the primary purposes of data visualization in general—to provoke novel reflection about data. This is true for the static kind of data viz, which I tend to focus on in my work, but even more so for the dynamic kind like Prism is offering.
If you have a way to help a decision-maker to sort of look outside themselves and critically reflect on what they do and don't know… and how this particular representation of knowledge that they think they already know can help them get a slightly different grasp of the world… that would be huge!
Thank you for reading! I’ll be following up this entry with another thread from this conversation relating to Chad’s views on data science—what it is, what it isn’t, and the principles that should guide the healthy practice and culture of data science.
Also stay tuned for the next chapter in the Living History—showing how applications of the AERO graph expanded from this one analysis of the go/no-go decision to analyzing entire trial portfolios in cancer, cardiology, precision medicine, and more.