Creating engaging data visualizations that tell a powerful story is a big part of what we do at Prism. For example, for the project we did with the team at Johns Hopkins, we included a bubble plot that showed the landscape of research activity across different music-based interventions and mental health outcomes (see below). In this figure, the location of each bubble corresponds to the particular intervention/outcome combination, the color indicates the direction of effect (e.g., red bubbles indicate studies that showed the music intervention to be superior to the comparator), and the size indicates the number of studies.
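For readers who like to peek under the hood, here is a minimal sketch of how a bubble plot with these three encodings (position, color, size) could be built in Python with pandas and matplotlib. The example data, column names, and color mapping are illustrative placeholders, not the actual Johns Hopkins dataset or our production code.

```python
# A minimal sketch (not Prism's actual pipeline) of a bubble plot where
# position encodes the intervention/outcome pair, color encodes direction
# of effect, and size encodes the number of studies. All data hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

# Each row: one intervention/outcome pair, the number of studies,
# and the predominant direction of effect.
df = pd.DataFrame({
    "intervention": ["Listening", "Listening", "Drumming"],
    "outcome":      ["Depression", "Anxiety",   "Depression"],
    "n_studies":    [42, 31, 4],
    "direction":    ["positive", "positive", "negative"],
})

# Red = intervention superior to comparator; blue = not superior.
colors = df["direction"].map({"positive": "tab:red", "negative": "tab:blue"})

fig, ax = plt.subplots(figsize=(6, 4))
ax.scatter(
    df["outcome"],            # x position: outcome category
    df["intervention"],       # y position: intervention category
    s=df["n_studies"] * 20,   # bubble area scales with study count
    c=colors,                 # color encodes direction of effect
    alpha=0.7,
)
ax.set_xlabel("Outcome")
ax.set_ylabel("Intervention")
plt.tight_layout()
plt.show()
```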
So what story does this figure tell? Well, first, we can see a lot of big, red bubbles, which indicates that music interventions have generally been found to be beneficial. Across the psychiatric outcomes category in particular, we can see a high volume of research activity and a clear pattern of efficacy.
Next, we can see that there has been far less research on physical functioning or global quality of life outcomes. Some of the more active or creative interventions (like drumming or composing music) also seem to be far less studied than the broader category of “listening to music”.
We can also see that there are negative studies (i.e., studies in which music was not found to be more beneficial than the comparator) at many of the intersections, which should encourage viewers to look more closely and drill down on the data to better understand why this might be the case.
But as much as this figure can tell us about the research landscape, it is equally important to attend to what the figure is not telling us. For example, it says nothing about the magnitude of effects, nor about study quality or the specific characteristics of the patient populations. For healthcare policymakers in particular, who may be interested in implementing music-based interventions, these details are critical.
After we initially shared a picture of this figure on Twitter, one critic pointed out the danger that viewers would be likely to misinterpret the information. Viewers might reasonably infer, for example, that big red bubbles mean big effect sizes. And although the critic did not elaborate much further (it was Twitter, after all), we can extend this line of argument and observe that viewers might also overlook population heterogeneity and study quality.
While we very much appreciate this critic’s point, and acknowledge these limitations of the bubble plot, it is important to note that on Prism’s Music and Mental Illness Landscape, where this figure is presented, it is merely one of more than a dozen visualizations depicting the Johns Hopkins team’s findings. Each figure illuminates a different facet of the story; each asks and answers a different set of questions. Each figure is also dynamic in several ways: Viewers can click on any section of any figure to drill down and inspect the underlying data. Viewers can filter the data by study type, population characteristics, interventions, outcomes, etc., and all the visualizations will update to depict the new, filtered dataset in real time.
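To make the cross-filtering idea concrete, here is a rough sketch of that pattern in Plotly Dash, where a single filter control drives every figure on the page. Everything here (the file name, column names, and figure choices) is hypothetical; it illustrates the general pattern rather than Prism’s actual implementation.

```python
# A rough sketch of cross-filtering in Plotly Dash: one filter control
# updates every figure from the same filtered DataFrame. All names and
# data are hypothetical, not Prism's actual implementation.
from dash import Dash, dcc, html, Input, Output
import plotly.express as px
import pandas as pd

study_df = pd.read_csv("studies.csv")  # hypothetical extracted dataset

app = Dash(__name__)
app.layout = html.Div([
    dcc.Dropdown(
        id="population-filter",
        options=sorted(study_df["population"].unique()),
        multi=True,
        placeholder="Filter by population...",
    ),
    dcc.Graph(id="bubble-plot"),
    dcc.Graph(id="effect-size-plot"),
])

@app.callback(
    Output("bubble-plot", "figure"),
    Output("effect-size-plot", "figure"),
    Input("population-filter", "value"),
)
def update_figures(populations):
    # One filter drives every visualization from the same filtered frame.
    d = study_df if not populations else study_df[
        study_df["population"].isin(populations)
    ]
    bubble = px.scatter(
        d.groupby(["intervention", "outcome"], as_index=False).size(),
        x="outcome", y="intervention", size="size",
    )
    effects = px.box(d, x="outcome", y="effect_size")
    return bubble, effects

if __name__ == "__main__":
    app.run(debug=True)
```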
This brings us to an important insight about what differentiates our approach to evidence synthesis: In what we might call the “old world of data visualization,” figures like this one are static and often (by necessity) few in number. The creators of such visualizations therefore have to make tough decisions about what information to highlight and what to leave out, based on what they think is most important for the viewer to know and on all the ways the viewer might misinterpret the presentation.
However, in Prism’s world, data visualizations are dynamic and essentially unlimited in number. In our landscapes, we can provide many different ways of seeing and interacting with the data. Thus, limitations or biases in one figure can be explicitly balanced by the presentation in another. Questions that are raised (and would go unanswered in a static presentation) can be answered immediately with a click or two.
This is what we mean by “data visualization as a conversation”. Static images that tell a story can be impactful. But dynamic visual representations of data that allow users to “touch” the data, to ask new questions, and to find their own answers… that is more than just impactful. It is empowering. The audience is no longer simply a “viewer” of the information; they become active participants in building their own understanding of the research.