Generative AI in Science: Enhancing Tools or Shifting Blame?
Written By:
Spencer Hey
If you’re active in, or just curious about, the generative-AI-for-science space, you’ve almost certainly encountered the following ethical question: If a generative AI system is being used for science and it outputs an error, who bears responsibility?
Shifting The Blame
At first glance, the question of shifting responsibility to generative AI systems sounds important. It reminds me of the debates around self-driving cars: If the autonomous car causes an accident, who bears the responsibility? Is it the car? (Surely not!) The manufacturer? The people who programmed the driving algorithm? (Maybe…?)
When we think about using generative AI in science, we can pose analogous questions.
If an AI science chatbot analyzes some data and summarizes it, who bears responsibility if that summary is flawed? What if the error has dramatic consequences, like ruining a year’s worth of experiments?
Or what about the recent cases where AI obviously wrote (or at least contributed to) published scientific papers: What if those papers are full of errors or should be retracted? Who is responsible? Is it the authors? The editors who accepted the paper? The people who trained the underlying model used for the text generation?
These questions are interesting. But if I step back and look at the vast amount of work to be done in science: the number of data gaps, the struggle to keep up with the new literature, the resources required to write up experiments, and so on. And then I consider the opportunity that generative AI presents to accelerate all of this work. These questions about responsibility start to seem more like distractions. They all leap ahead to some future world where humans have given up their agency—where the choice for generative AI in science is either to trust it completely or not use it at all. But that is a false dichotomy.
Generative AI Is A Cognitive Extender
We can instead regard generative AI as simply another tool to advance science: a powerful “cognitive extender” (to borrow a great phrase I learned from Stephen Rosenfeld). Whatever tools I use to help with my scientific work, it remains my responsibility to test, evaluate, and calibrate them, whether they involve AI or not.
In other words: As a scientist, it is my responsibility to check that my tools are producing good data or good output. This is no different than the responsibility to validate an assay or calibrate a microscope. I see no reason to think that the involvement of AI shifts this responsibility. If I use generative AI to write a paper, it is still my name that goes on the paper. If that paper is full of “hallucinations”, I can’t point the finger at AI and say, “It’s not my fault. Blame the machine!”
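To make that concrete, here is a minimal sketch of what "checking your tool" might look like in practice: before trusting an AI assistant's data summaries, run it against a small hand-verified gold set, exactly as you would run known standards through a new assay. Everything here is a hypothetical placeholder I've invented for illustration (`ai_extract_mean`, `GOLD_SET`, `validate`), not any particular product or library.

```python
# Hypothetical sketch: validating an AI summarizer the way you would
# validate a new assay, by running it against hand-verified "standards".

def ai_extract_mean(report_text: str) -> float:
    """Placeholder for the AI tool under evaluation: given a free-text
    report, it is supposed to return the reported mean value."""
    raise NotImplementedError("swap in your actual AI call here")

# Hand-verified gold set: (report text, value checked by a human).
GOLD_SET = [
    ("Across 12 runs the mean yield was 4.7 mg/mL (SD 0.3).", 4.7),
    ("We observed a mean latency of 132 ms over 50 trials.", 132.0),
]

def validate(tolerance: float = 0.01) -> float:
    """Return the fraction of gold-set cases the tool gets right."""
    hits = 0
    for text, expected in GOLD_SET:
        try:
            got = ai_extract_mean(text)
        except Exception:
            continue  # a crash counts as a miss
        if abs(got - expected) <= tolerance:
            hits += 1
    return hits / len(GOLD_SET)

if __name__ == "__main__":
    # With the placeholder above, every case is a miss; wire in a
    # real tool to get a meaningful score.
    print(f"Gold-set accuracy: {validate():.0%}")
```

The point is not the code; it is who runs it. The scientist decides what counts as "good enough" accuracy and signs off on the result, which is precisely where the responsibility sits.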
I recently shared the results of an experiment I ran where I used AI to distill an ethical theory from the literature and then apply that theory to the analysis of a new case. I think this points toward an exciting use for generative AI. There is potential here to conserve time and intellectual resources by outsourcing some of the boilerplate writing to this kind of tool.
But if I use that approach with generative AI to conduct an ethical analysis, I remain the responsible party. If someone else uses the same approach to conduct an analysis, they are the responsible party.
It seems to me that wanting to shift responsibility to technology in such cases is like a PI who wants to shift responsibility to their postdoc for fabricating the data behind a publication. Yes, the postdoc who fabricated the data has real agency (which an AI does not). But that does not absolve the PI of their ultimate responsibility. The PI is the scientific guarantor. If the data turn out to be fabricated, the PI is responsible.
In science, you must check and double-check your data and your instruments, whether analog, digital, or human.
So let’s return to our question: If a generative AI system is being used for science and it outputs an error, who bears responsibility?
I think the answer is simple: The scientist is using the tool. The scientist bears the responsibility.