In this month's "Discussing Data Science" episode, I talk with Stephen Rosenfeld, a physician and scientist by training who now runs his own independent institutional review board (IRB) and has a ton of experience in the world of IRBs.
You can watch the video below or on YouTube. But if you'd prefer to read, keep scrolling. The complete transcript (edited for length and clarity) is below.
Spencer Hey (SH): My guest today is Stephen Rosenfeld. Stephen is an executive board chair at Northstar Review Board, which is a 501(c)(3) non-profit IRB based in Maine. He previously served as the Chief Information Officer at the NIH Clinical Center and Maine Health. He was also the president and CEO of Western IRB and much more.
So if you care about clinical trial data and quality, Stephen is a good guy to know! Welcome, Stephen, and thank you so much for joining me today.
Stephen Rosenfeld (SR): Thank you, Spencer, my pleasure.
SH: Why don't we start with the world of data and IRBs, since that's really how you and I first got connected. I'll begin with a little bit of background for our audience: Stephen and I met around 2015 when I was teaching at Harvard. You agreed to come and give some guest lectures in my course, and if I remember correctly, in the first year you spoke on data security issues, but then later you came back and lectured on a broader array of issues in the world of research regulation and oversight.
Although we did just touch on your history, through your various titles and roles, I'd love to hear you talk a bit more about your journey through the clinical trials and IRB world. What drew you to the space initially?
SR: Sure. So... it's complicated. I was trained as a hematologist at NIH and practiced in the intramural program. But I was always attracted to informatics. I think had I been born a couple of generations later I might have been a computer scientist. So I ended up being the guy who ran the network in the lab. One thing led to another, and basically, I ended up being the Chief Information Officer of the NIH Clinical Center.
The interesting thing about that, and a thread that connects everything I've done, is that everything is about data. When I was at NIH, we used to talk about computers as "cognitive extenders".
Here's an example of what I mean by this: In hematology, one of the key measurements we use to assess anemia is the hematocrit, which is the percentage by volume of red blood cells in the blood. That percentage is an artifact of a test that is at least 100 years old, where you gather blood, put it in a centrifuge, spin it down, and see what proportion of the volume is red blood cells.
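The spun hematocrit described here really is just a ratio. As a toy sketch (the function name and sample values are hypothetical, purely for illustration):

```python
def hematocrit(red_cell_volume_ml: float, total_blood_volume_ml: float) -> float:
    """Packed-cell volume as a percentage of whole blood: the classic
    spun-hematocrit measurement, expressed as a simple ratio."""
    if total_blood_volume_ml <= 0:
        raise ValueError("total blood volume must be positive")
    return 100.0 * red_cell_volume_ml / total_blood_volume_ml

# A typical adult sample: about 4.5 mL of packed red cells in a 10 mL tube.
print(round(hematocrit(4.5, 10.0), 1))  # → 45.0
```

The point of the example is how little the number carries: it is one derived percentage, with no direct information about cell function, shape, size, or volume.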
Well, no one does that anymore and that's obviously a very derivative measurement. It doesn't tell you anything about the function or shape or size or volumes of red blood cells directly. Now we have all these new tools, but we still train people to use hematocrits. Instead, we could have computers put things together in a way that a hematologist or any physician looking at data about someone's blood could actually see something that's directly meaningful and corresponds to function. That would be using computers as a true "cognitive extender".
Unfortunately, in medicine, we chose to use computers as more convenient filing systems and billing systems. I see that as a real loss of mission.
SH: Why do you think that is? Why do you think medicine has taken this path of using computers and data for convenience or billing rather than illuminating more patient-centered or substantive clinical judgments?
SR: I think it's cultural, something about the practice of medicine. Medicine is very tradition bound. My father was a doctor, and he used to tell me when I was in medical school that everything he'd learned had turned out to be wrong. That's not far from the truth!
I just read an article today on depression and how the chemical imbalance hypothesis—which was all the rage 30 years ago—was just a gross oversimplification. The article made the point that just because aspirin brings down your temperature, that doesn't mean a fever is caused by an absence of aspirin.
There's a lot of this sort of backwards logic in medicine. Electronic health records (EHRs) are another example of this. These EHR systems are very expensive to put in. They represent a huge investment for an organization. But in the end, the system is just about billing and coding and other things that are not fundamentally interesting to a doctor or to a scientist.
SH: Let's take a step further down the road here and talk a little bit about your transition to the IRB world. You started in bioinformatics at the NIH, but how did you get plugged into IRBs?
SR: It's worth noting that my interest in informatics and my talk about maybe having been a computer scientist in another life is not because data by itself fascinates me, or because you can do cool graphs and things. In fact, I was a physics major in college and I remember an epiphany I had in my senior year. I watched a PhD student present their thesis and it was two clouds of green dots that came together and then went away. I remember thinking to myself: You're going to spend another five years making a movie like that?
In other words: It's the human side of all this that's fundamentally interesting to me. That's really what I mean when I talk about cognitive extenders. Computers should serve our human interests. And IRBs are right at the center of the human side of scientific research.
The other thing is that in academic medicine the way forward is to become more and more specialized, more and more of an expert on smaller and smaller pieces of the world. And that's fine, but that wasn't what was interesting to me. I was always interested in seeing the bigger picture.
IRBs are at a unique place to see that bigger picture—particularly high-volume IRBs. You see everything. It's fascinating! It's really sort of the ideal job.
SH: Let's unpack that some more. So there you are, with the ideal job as a chair with a high-volume IRB. What are you seeing? What excited you the most?
SR: Well, when I was president and CEO of Western IRB, that was a business job. It had nothing to do with the reviews. For those of you who knew Western IRB, it was a great IRB. But the founder wanted to retire and she sold the company to investors. The investors were looking for a CEO. I had an informatics background and an MBA, and they were at least theoretically interested in data and monetizing data.
So Western IRB was the first IRB that transitioned to investor ownership. And when I stepped into the CEO role, people said what a relief it was to get a doctor and a researcher in that seat. They were worried it would be a finance person.
But I had been in the public sector my whole career, so I was a bit naive entering into this arrangement. My interests and the investors' interest in value—as opposed to subject protections, advancing science, and other things—were not entirely aligned. So I left Western IRB after two years.
Then I was offered the job of executive board chair at Quorum Review, which was another for-profit private IRB, but it was family owned, not investor owned. My training was shadowing my predecessor for a couple of weeks, and then I was expected to chair five meetings a week. So in addition to absorbing Robert's Rules of Order, I had a lot of catching up to do on research regulations, process, meeting management, and other things.
For the first month, everything was new and it was wonderful. I learned all these new things. The boards had great discussions. They operated just as they should—with appropriate perspectives shared and discussed for as long as it took to make determinations in the best interests of research participants.
But at five meetings a week, it didn't take long for me to realize that I couldn't possibly remember everything I'd done. The IRB systems we used—and I think are used by most—are really process systems. They don't help you make good decisions. They don't even help you look at related decisions—for example, over the five years, say, that a protocol is under your jurisdiction. It's hard to go back and understand why the board made the decisions it did on the same protocol, much less on a related protocol.
So just from a personal perspective, I started keeping an electronic notebook that was cross-referenced. But then it soon became clear that I had to develop something that would let me do more than this, and so I developed my own electronic database/notebook for IRB determinations. This made it so that, before a meeting, I could share notes with all of the other board members and all the background from what we had done in previous meetings.
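The transcript doesn't describe how this cross-referenced determination database was actually built, but a minimal sketch of the idea is easy to draw. The snippet below is purely illustrative—the table names, columns, and example data are all hypothetical—using an in-memory SQLite store: protocols tagged by sponsor and intervention, each determination recorded with its rationale, so that prior decisions on the same or related protocols can be pulled up before a meeting.

```python
import sqlite3

# Hypothetical schema: protocols keyed by sponsor and intervention,
# with each board determination recorded alongside its rationale.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE protocol (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    sponsor TEXT,
    intervention TEXT
);
CREATE TABLE determination (
    id INTEGER PRIMARY KEY,
    protocol_id INTEGER NOT NULL REFERENCES protocol(id),
    meeting_date TEXT NOT NULL,
    decision TEXT NOT NULL,     -- e.g. approved, deferred, modified
    rationale TEXT              -- why the board decided as it did
);
""")
conn.execute("INSERT INTO protocol VALUES (1, 'Trial A', 'Acme', 'Drug X')")
conn.execute("INSERT INTO protocol VALUES (2, 'Trial B', 'Acme', 'Drug X')")
conn.execute("""INSERT INTO determination VALUES
    (1, 1, '2020-01-15', 'approved', 'risks minimized; consent adequate')""")

# Before a meeting on Trial B, pull every prior determination on any
# protocol studying the same intervention: the cross-protocol view
# that minutes alone do not give you.
rows = conn.execute("""
    SELECT p.title, d.meeting_date, d.decision, d.rationale
    FROM determination d JOIN protocol p ON p.id = d.protocol_id
    WHERE p.intervention = (SELECT intervention FROM protocol WHERE id = 2)
    ORDER BY d.meeting_date
""").fetchall()
print(rows)
```

The same lookup, keyed on sponsor or intervention, is what supports the within-protocol and cross-protocol consistency (and the safety-report cross-checks) discussed below.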
All IRBs do record their deliberations in minutes, but what's required to be recorded in minutes is really only controversial issues. And even in that case, the minutes are an institutional and corporate liability. So you put in as little as you possibly can, and this makes them essentially useless for actually understanding why people make decisions.
For me, the job would have just been a drudge had I not had some system that allowed me to learn and to build. So I built a system. And that was great. And I'm not aware of any other IRB that uses anything like that.
And you know, it's not just about understanding the decisions that your board has made on a protocol. I think we owe investigators and research participants consistency, or at least justified inconsistency, in our decisions. Otherwise, we're just arbitrary because we have no idea what we did before. I think that's a disservice to everyone. And this is easy to fix, but it's not been a priority for systems that are designed, like electronic health records, to support process and billing and other things.
This is, in part, why I started Northstar Review Board as a non-profit IRB: because I think the mission of the IRB to protect research participants and to support the responsible conduct of science fits squarely within the requirements for 501(c)(3) status. And the IRS agreed with me.
A Data-Driven Knowledge Vault for IRBs
SH: Let's go back a step and talk more about what I'll call your "knowledge vault"—that is, the system that you created to track and understand the decisions that your IRB made.
SR: There's been a lot of talk within the past five or six years about IRB precedent. A lot of the people who are in the IRB world and in the bioethics world have a legal background. So precedent makes a lot of sense to them, and when they hear this problem they think, "Oh, precedent! That's how we do it."
In the courts, precedent is a very structured and well-defined system. I think this is really hard to implement for IRBs. I mean, we don't know what the standards of good judgment should be. So when I created this system, it allowed me to do several things:
It allowed me to look back over the history of a protocol and see all the notes I had taken, including notes from meetings and other things—so that supported within-protocol consistency.
It allowed me to look at any protocol that was from the same sponsor or looked at the same intervention—so that helped with consistency across sponsors or across similar interventions.
It allowed me to think more globally about safety: What often happens is you'll get a safety report as an IRB about an unanticipated problem with a product. You might have eight protocols under your purview for studies of that product, and the safety report is submitted by an investigator on a single protocol. As the IRB, you could simply wait for the sponsor to submit the report for their other protocols, but really—you now have safety information on that intervention in hand, and you should be able to look at all the other protocols and see whether it's relevant, whether you should ask for a report in the context of those other protocols, whether the consent needs to be changed, and so on.
It allowed me to think more globally about the dangers of uninformative research. This is something that you and I have talked about extensively. One of the problems with the way IRBs operate is that they look at protocols one at a time. So you can look at the scientific design of a study and say, "Yeah, in an ideal world you would enroll this number of patients and you'll be able to answer the question, etc. etc." But that ignores that there might be 10 other protocols that are trying to enroll from the same patient population.
So from where you sit on the IRB, you can look at the assumptions underlying recruitment and see whether what you're seeing in front of you is actually achievable. Because if the study can't answer the question, and particularly if the protocol represents risk to people, then you're asking people to take on risk in full knowledge that their contributions are not going to be meaningful.
Then the other piece of uninformative research is when the question may have already been answered. There are lots of "me too" protocols. Of course, investigator autonomy is an academic tradition, and sometimes the investigators can frame their studies slightly differently. But again, if you know that a question has been answered, then you're asking people to put their minds and their health on the line knowing full well that there is little or nothing to be gained.
Bottom line: If the question was already answered, then you shouldn't be doing the study. IRBs have no tools to look at that, and I think that's pretty basic ethics.
U.S. GAO Report on IRBs
SH: Why don't we transition and talk a little bit about the Government Accountability Office's (GAO) report from January 2023. In this report, they reviewed the existing laws and regulations, as well as some of the recent literature around IRBs and IRB guidance. They looked at registration data on how many IRBs exist today, drug application inspection data from the FDA, and IRB inspection data from the Office for Human Research Protections (OHRP). They conducted interviews with officials and other folks from 11 different IRBs.
Broadly speaking, the main finding is that no one is really paying sufficient attention to what IRBs are doing. No one is looking to see if IRBs are actually fulfilling their mission: Are they protecting human subjects well?
But that's a pretty high level summary. I'd be curious to get your thoughts on the report. What jumped out at you as the most significant findings?
SR: I don't know if you captured me laughing when you brought up the topic, but I think the report missed the point.
The genesis of this GAO investigation was a 2019 letter issued by Senators Warren, Sanders, and Brown that observed the enormous consolidation in the IRB industry—that much of the work by IRBs is done now by private equity funded companies. So they asked what protections were in place, given the obvious conflicts of interest. What checks were in place to make sure that private IRBs paid appropriate attention to research participant protections?
I would have loved to have known what 11 IRBs they talked to... but their findings were as you summarized: (1) FDA and OHRP don't do enough inspections, so that there's no check even that what IRBs are doing is compliant with the regulations. (2) We have no idea what IRB quality means. So are we protecting participants? What does it mean to protect participants?
So I think both of those are right. But there's a longer history here. The OHRP has gone through periods when it was very active in inspections. When I was chair of the Secretary's Advisory Committee on Human Research Protections (SACHRP), we were asked to comment on an Office of the Inspector General (OIG) report that questioned whether OHRP was sufficiently independent.
The genesis of that OIG report was that there was a controversial trial called "SUPPORT", which was about oxygen levels in premature infants. OHRP came out very critical of the consent form in that trial because they felt it did not disclose the risks appropriately.
And there were other issues with the trial: As one woman who testified before Congress said, "You have a premature baby and they come to you with a protocol called 'SUPPORT'. Are you not going to sign on?"
So there were significant issues with the SUPPORT trial—scientific and ethical. I don't know what the right answer was in that case. But lines were drawn. This was an NIH-funded trial and the NIH bridled at OHRP's interference. Some officials at the NIH published a New York Times editorial critical of OHRP.
And OHRP's budget—particularly when compared to the growing budget for NIH and other research—has been shrinking for decades. So to suggest, as the recent GAO report does, that OHRP is not doing enough inspections is absolutely true. But the reason they're not doing enough inspections is because other parts of the government and industry are not interested in them doing inspections and are cutting their resources so they can't do inspections.
I think the same observations apply to FDA. To put the onus on the FDA or OHRP really ignores what's going on and sets us up to just repeat the cycle. FDA and OHRP may get a boost in funding to do more inspections. They'll shut something down that someone doesn't like. Then their funding will be cut. And we won't hear about any of this. No one hears about the OHRP budget or the portion of the FDA budget that's allocated for inspection.
So I thought that to ignore the broader historical and social dynamics that led to those lower inspections was short-sighted and just invites more of the same.
I also think it's true that an aggressive inspection campaign from OHRP and from the FDA would burden researchers. It would likely reduce research volumes. But research volumes are not an end in themselves.
The NIH recently put out a revision to their genomic data sharing policy and there's a statement in there that just drove me crazy. They say that the success of the existing policy has been demonstrated because there have been something like 50,000 studies that have used this data.
But they're talking about genomic data, which affects individual privacy. It doesn't matter how many studies have been done. What's the impact been on public health?
So we just have all of this stuff backwards. It's like the aspirin and fever thing. Or with investor-owned IRBs: If you follow the dogma that the purpose of the corporation is to deliver value to investors, then you expect value to investors to ensure research participant protections. But if you followed the mission, it should be that protection of research participants delivers value to investors. But those are not necessarily the same. It's a logical fallacy to equate the two.
The other part of the report which you mentioned is that we really don't have generally accepted measures for the quality of IRB review. I think in the absence of such measures, we've fallen back on regulatory compliance. But if you look at the regulatory criteria for research that an IRB is supposed to follow, the first one is that risks are minimized, and the second one is that the potential benefit of answering the scientific question justifies the risks. These are pretty basic questions, but they are very subjective. So who gets to judge whether risk is acceptable or not? Who gets to judge what the scientific impact will be? Who gets to judge whether risks are sufficiently minimized?
The regulations were written in recognition of the subjectivity of those requirements, and that's why we have requirements for IRB membership. It's not supposed to be just the scientists sitting there and deciding—that's what we did before and history has shown that does not work. So now you have these other individuals who get to judge... but there's nothing in the regulations or the proposed measures of the GAO report that talks about decisional quality.
And these issues are not unique to investor-owned IRBs. I know of an academic IRB that will review 40 protocols (or 40 "items" because some of them are new protocols, some of them are amendments or other things) in two hours. You may be able to check the boxes so that you appear to be compliant with the regulations, but that cannot possibly be a quality review.
SH: No, just on the face of it, that makes no sense. You couldn't even read 40 abstracts in two hours and actually understand them.
SR: Right. The reality is that most of the people sitting on that board have read maybe one thing if they're presenting, and nothing else.
The other thing that's gone on is that the individual who's assigned the responsibility to present the science to the board does all their work ahead of time, and it's shared in the electronic system. So basically the meetings become a rubber stamp. I have to say: In addition to that being, I think, a betrayal of research participants and their protections, I also think it entirely devalues the IRB process.
Earlier I talked about this as the best job in the world. What made it the best job in the world was the science, and sitting around the table with colleagues you respected talking about it, and talking about significant issues. It takes time to do that. It takes time to deliver a quality IRB judgment. I think what the GAO report calls for—which is convening a group of stakeholders to determine what IRB quality is—is not going to get to that.
Yet, despite all that, I think we still have to acknowledge that the IRB system as a whole has had a huge positive impact. We haven't had another Tuskegee, at least that we know about. Occasionally problems do come up, but they're on an entirely different scale. I think it works—just knowing that there's independent review works. So against that, how do you measure whether a specific IRB is preventing preventable harms? Prevention of harms is a really hard thing to measure anyway, so I think that's the wrong way to approach it. Instead, I think we should be assessing decisional quality, meeting contributions, things that are hard and don't lend themselves to quick numbers. But these are nevertheless the real measures of the quality of what the IRB does.
Everything is Data
SH: When we were arranging this conversation, you suggested that one perspective you wanted to share is the idea that everything is data. Let's dive into that. What does "everything is data" mean to you and how do you see that intersecting with the issues we've been discussing?
SR: That perspective comes from the trajectory of my career, as well as the fact that IRBs are part of the research enterprise. IRBs have to make independent decisions, but they're part of the research bureaucracy. Thinking about the issues we discussed earlier—about precedent and consistency—I'm not sure why the decisions an IRB makes are not held to the same standard as, for example, when FDA has to review the data to support the safety and efficacy of a new drug.
I mean, the data (on the IRB's decision making) is just not there. On the IRB, you have six people who sit in the room, all bringing their different perspectives. They discuss the protocols. They bring their personal biases—because we all have biases. But if you don't capture those decisions and their rationales, you will not learn from them.
You know, I trained as an undergraduate in physics, then as a doctor and medical researcher. I'm used to quantitative data. I'm not a qualitative researcher. So we're getting out of my area of expertise when you talk about how exactly to collect data on IRB decision making. But that's not a reason not to do it. I think it's really important data.
Moreover, when the research regulations were passed, they were based on very traditional Enlightenment-era ethical principles. But now we have genomic research and germ-line genomic interventions. We have ubiquitous data sharing, which come with issues of privacy. IRBs need help in figuring out how to apply those principles to these new issues—and they need data to do this well.
The demographics of our society are also quite different than they were in the 1980s. There are political issues that bother people—where people don't feel appropriately served by their country. These are matters of ethics at some level, and they change and evolve over time. Why shouldn't we be evolving with them?
I think unless we collect data of some form on IRBs, and unless that data is available and transparent—then we shouldn't pretend IRBs have anything to do with science.
We're also missing a huge opportunity. We're in a crisis of trust in science, public health, and government. The pandemic illustrated this. You can of course speculate about whether that crisis comes from behaving in an untrustworthy fashion or from political manipulation. It doesn't matter. We're in a crisis of trust.
The requirement for IRB review was instituted after Tuskegee, in response, I think, to get ahead of what was inevitable public distrust in the research enterprise—particularly the government-funded research enterprise. So now we have this tool: the IRB system, the system for ethical oversight. Yet the public doesn't even know it's there! If we're trying to rebuild trust, shouldn't we be trumpeting this system and making it as robust and transparent as possible?
Yes, people are going to disagree with IRB decisions. People also don't have to participate in research. But the general trend of what IRBs find acceptable should be refracted through the lens of society. So it's disappointing that the GAO report doesn't seem to see this. The government should be trumpeting the fact that we have this system and should be looking for ways to make it accountable to the public and to research participants. They should be using it as a tool to start to address some of the issues of distrust.
SH: You're right. If you think about what we saw with the broad skepticism about vaccines and research in the COVID-19 pandemic: Some of that was healthy skepticism. But the role of IRBs and research regulations in safeguarding the interest of subjects in those trials—in safeguarding the integrity of the process—was totally lost.
SR: Right. I think that people in the scientific community are generally unwilling to admit how little we actually know. That's just endemic and it's probably always been that way. But we have to be honest about medicine. As my father said: Everything he learned changed. I have to say: Many of the things I've learned have changed. We shouldn't be telling people we have the answers. We should be telling people that we're trying to figure it out. This is our best information at present. This is why we think it's this way, etc. And data is the best way to do that.
What do you see as the biggest data gaps in the field?
SH: Why don't we move now to the three questions that we like to ask everybody. So the first is: What do you see as the biggest data gaps in the field? You've touched on this already, but I'm curious whether you might single out any particular issues.
SR: IRBs. It's a huge data gap. What data is there is held within corporate and academic walls. It's not shared. So I think that's just a huge issue.
It's not hard to solve, but the resources will come out of what would otherwise go to investor profits or other parts of institutional budgets. As people say in academia: IRBs are not profit centers.
When I was in the academic world, the idea that you would be judged by whether you're a profit center for your institution would have seemed quite strange. But that seems to be the world we're in now.
What excites you most about the future of research?
SH: All right! Question number two: What excites you the most about the future of research?
SR: I go back to this business about cognitive extenders. I think we have unprecedented tools.
I don't happen to think they're being used for interesting things. They're typically being used for commercial purposes, not scientific purposes. But we are at a different stage in terms of the tools we have to analyze data. Properly used, I think they could be revolutionary.
I don't really see AI and Big Data and all these things replacing or automating physician decision-making. But I can see them providing us much better insights into disease. Just think about how complex the immune system has turned out to be with all of the different subclasses of cells. No one can hold that in their head. But build a neural network or something similar, for example, and suddenly you have tools that you never had before.
I know some people have talked about using AI for IRB decisions. I'm not sure that's appropriate. I think that's a bit of a stretch and I'm not sure it gains us anything.
But in terms of science—Wow! What a time to be alive.
Wave the magic wand...
SH: Question number three is "wave the magic wand". If there's something that you could wave a magic wand and change about research, about the industry: What do you think you would pick?
SR: That's easy. I would make all IRBs non-profit. Just take profit out and return them to their mission.
It goes to a deeper issue of mission being lost. I've heard conversations where people are pricing gene therapy drugs at four million dollars an application because that's what it's worth to the individual to be cured, or the future cost to society. That doesn't make sense to me.
I think that if you have a treatment for sickle cell disease, everybody who has sickle cell disease should get it. It should be priced to make a good profit and to decrease the nation's health care costs significantly. Instead we have these crazy ideas.
So I think, in all these domains, we really have to remember why we're doing things. Money is a tool. It's rarely a purpose. That's a broad societal comment, I know. But I think we've sort of lost our way and we've certainly lost our way with for-profit IRBs.
SH: Stephen, thank you so much! We really appreciate the time, and it's been a fascinating discussion with you, as always.
SR: This was great. Always a joy, Spencer.