There’s an element of the ridiculous about many scientific studies, and when those elements are pulled out of the bigger picture and featured in attention-grabbing headlines, science gets thrown into the middle of political debate. Wasted funding. Lack of oversight. Fraud.
When I posted a video about NIH funding on Instagram earlier this week, commenters predictably brought up the ‘waste’ of millions on research into shrimp on treadmills and transgender monkeys, among other examples. These soundbites come from reporting on research projects whose value extends far beyond the ‘ridiculous’ details that capture a wide audience. The shrimp on treadmills? Part of a study on the effects of bacteria on shrimp in shrimp farms (and the treadmill only cost $1,000). The transgender monkeys? Part of an investigation into the susceptibility of trans women to HIV, aimed at protecting them.
When looking for a research article to use as an example for this post, I picked “A New Optokinetic Testing Method to Measure Rat Vision,” conducted at the University of Southern California, because it’s exactly the kind of study that can end up misrepresented. I can imagine someone sharing a photo of a rat on a podium watching two TV screens, captioned “Your tax dollars in action,” and the endless slew of comments trashing science, researchers, and government-funded projects that would follow.
“We’re paying to teach rats to watch TV now?”
No. We’re not. And while many articles do explain the reasons behind studies and their intended outcomes, the sensational headline is often the only thing someone sees or remembers. [I’m hoping the subtitle of this post is enough to keep anyone from spreading misinformation about what researchers are doing with rat vision; time will tell.] This has multiple negative consequences. When research is dismissed as wasteful and ridiculous, public trust in science diminishes, and science becomes political in a way that hinders progress. We waste enormous amounts of time countering misunderstandings, often unsuccessfully, because it isn’t as simple as sharing the published research to show why it’s not the waste it’s made out to be.
All NIH-funded research is required to be published open access, meaning the articles are free to the public rather than stuck behind paywalls. That’s a great start, but while the goal is to make science publicly accessible, these articles are only truly accessible to people with enough of a scientific background to understand them. Note that I’m not saying they are only accessible to intelligent people; this isn’t about intelligence, but background knowledge. Many of the most intelligent people I know who are not scientists could not work out the reasoning behind a scientific study from reading one research paper on the topic, just as I don’t understand the inner workings of their jobs when they explain them to me in field-specific terminology. The problem is that scientific articles are written for scientists, and opening them up to the public is insufficient for increasing public awareness of (a) why a study is done, (b) who the study benefits, and (c) what will be done next with the work.
What is the solution? As is so often the case when I write about problems that need solving, I don’t know. But I have one idea that could at least help make research findings more accessible to everyone: include a public description at the beginning of each article that covers the three points I highlighted above. This won’t stop sensationalist headlines or media articles that twist what the work actually involves, written by people who aren’t particularly interested in portraying a study’s real significance. But if every paper carries a public description, then whenever a misleading article appears, there will be a brief, accessible outline of the study sitting alongside the full text, ready to be shared immediately. Nobody has to go searching for the original article to fact-check the public description, and it becomes much harder for people spreading misinformation to succeed.
The scientific abstract of “A New Optokinetic Testing Method to Measure Rat Vision” (rightly) focuses solely on the development of a new, cheaper, and simpler apparatus for this test of visual acuity in rats with normal eyes and those with retinal degeneration. Scientists who know what these tests are used for will easily infer what this means, why it’s so important, and that it’s not really about finding out how rats see. How could we make this clear to anyone who reads the article? Here’s an example:
Public Description
Retinal degeneration causes irreversible blindness in more than 170 million people worldwide and is very challenging to treat. Researchers simulate retinal degeneration in animal models to test the effectiveness of new treatments that could help preserve or recover the vision of people affected by this disease. Since a rat can’t read letters on a wall, we need another way to measure improvements in vision after giving a treatment. We have had such a method for a while: tracking rats’ eye movements, which replicates the test your eye doctor uses to determine your vision. However, the equipment is often prohibitively expensive and requires extensive training to use, making it difficult for researchers to conduct these important tests. This study makes the testing accessible to more researchers, promoting the future development of treatments that could help the millions of individuals experiencing retinal degeneration see better.
This information shows that the study isn’t focused on rats. It’s focused on helping recover people’s vision. We just happen to need a method that measures rat vision to achieve that goal.
In my opinion, including information like this with every published paper could go a long way toward helping more people understand how basic research, preclinical research, clinical studies, and the treatments they receive when they visit a doctor are all interlinked, and how a ‘ridiculous’ study might be the reason they have access to a medication that makes their life better.
Do you think this would work? How could we go about advocating for it? I’d love to hear your thoughts.