Have you ever looked back on a decision and thought, “I should have known this wouldn’t end well”?
Yeah. Me too.
But that “should have” is one of the most unproductive lies we tell ourselves. It’s a cognitive trap, and it’s annoyingly easy to fall into because often, the only reason we now know something would end badly is that it did. It’s classic ‘hindsight bias’. You can’t factor information into a decision if you didn’t have that information at the time, but your brain loves to replay every bad decision like a director’s cut commentary and pretend it was obvious all along.
We don’t just do this to ourselves. We apply this faulty logic to other people, too. Often, once we’ve learned the very basics of a subject, we start to mistake foundational knowledge for common knowledge. We assume everyone else knows it, too. And then we end up saying things that are unintentionally baffling to people who do not, in fact, have the underlying information necessary to understand what we’re talking about. What can happen when we do this?
Here’s a personal example (that I can’t quite believe I’m putting on the internet, but whatever). I was having a conversation with someone about kidney research. My knowledge of kidneys at that time was… minimal. Like “I vaguely remember a diagram in a high school biology textbook” minimal. I had a mental image of a nephron; specifically, something along the lines of this sketch that looks like the poolside water chute of my nightmares (I always think the end of the tube will be blocked and I’ll be trapped inside forever… that’s not just me, right?):
In my mind, that nephron was the kidney. Not part of the kidney. Not a functional unit of the kidney. The whole kidney. Like this:
So, I’m chatting with this kidney researcher, and he’s talking through some method involving staining 50,000 nephrons. And all I could think was… “How the hell did you get 50,000 kidneys?” The rest of the conversation kind of went over my head as I was internally picturing one of our walk-in cold rooms filled wall-to-wall with human kidneys. It wasn’t until I looked it up that I discovered that the kidney is not, in fact, one giant nephron. Each human kidney has about one million nephrons in it. I’m just glad I didn’t ask about the 50,000 kidneys.
That’s somewhat of a ridiculous example, but if you’ve ever sat through a talk on a topic in your own field and walked out thinking, “I have absolutely no idea what the f*** he was talking about”, you’ve likely witnessed the Curse of Knowledge in the wild. This term was coined by three economists in 1989 to describe the tendency of experts to forget what it’s like to not be experts and to assume everyone else has the same knowledge they do.
This phenomenon comes up frequently in research settings. Researchers can be really, really good at making complex things even more complex. Using jargon to explain jargon. Referencing “that Brown paper from 1996”. Throwing in ‘common’ abbreviations without definitions even though those abbreviations could mean more than one thing. How do you read “NLP”, for example? Natural Language Processing or Neuro-Linguistic Programming? It’s not always obvious from the context if the context is also complex.
The problem isn’t that the audience isn’t intelligent, but that the person presenting the information hasn’t provided enough information for people to build understanding on. Imagine being given a box of IKEA bookshelves to assemble without directions. Those things are challenging enough with the directions.
Providing insufficient background material creates miscommunications that lead to misunderstandings and inaccurate assumptions. Scientific research is always building on something. There are few, if any, entirely novel studies that are completely disconnected from the decades of research that came before them. The prior studies are scaffolding, and when you leave the scaffolding out when you write about your findings, your conclusions can fall apart in the reader’s mind.
Overly complex and incomplete information undermines understanding. The audience will hear your conclusion, but without the foundational elements in place, they might struggle to contextualize it within the field, completely misunderstand it, or even walk away thinking you meant the exact opposite of what you said. (This is always fun when your misinterpreted idea gets referenced as justification in someone else’s grant application or dumped in a review article that gets more citations than your original correct version.)
So, what can we do about this when communicating scientific findings?
Get someone else to read your draft
This is so obvious that it almost seems insulting to suggest it, yet most of us are terrible at editing our own work. I provide professional editing services for academics (sorry, just a minor ad there), and even I would have someone else proofread my research if I ever decided to do research again. This isn’t because we’re bad editors or don’t care; it’s because we already know what we’re trying to say. It’s really hard to spot logic gaps and confusing statements when we know what we meant when we wrote something.
The fix for this is to give it to someone who doesn’t have that information. Ask them to read it and highlight the parts that are confusing to them or read like you’ve skipped a step in the logic. Grad students, postdocs, and your colleagues in different fields are a great option for this if you don’t want to go down the academic editing route.
Remind yourself of the ‘why’ and the ‘so what?’
This sounds obvious, too, but it’s surprisingly easy to lose sight of your core argument when you’re halfway through the writing process. So, once you’ve finished that first draft, ask yourself these questions:
· What is the point of this study?
· Why did I do this work?
· Why should people care that I did this work?
· Who is this information for?
· What is the most important thing I want my audience to take from this?
Go through the paper with these questions in mind. If you can’t answer them all in one or two plain English (no jargon!) sentences, your paper is probably more confusing than it needs to be and is not telling the reader clearly why the research is important and why they should care about it.
Assume your reader knows nothing
OK, not nothing, nothing. You’re writing for your peers, but unless those peers are substantially more talented in mind-reading than anyone I’ve ever worked with, you still need to give them all the information they need to follow your argument without having to open 182 new browser tabs to untangle your terminology. (We’ve all read those papers. Do we remember anything about them?)
Avoid assuming that your audience knows what you know. If something is absolutely foundational to your argument, spell it out. Briefly, clearly, and perhaps not in a tone that suggests “I can’t believe you didn’t know this already”.
Use transitions to build your narrative
One thing I encounter quite frequently while editing is a narrative that presents five separate ideas and then explains how these all fit together in a single paragraph at the end. This type of structure requires the reader to go back and read the thing again to fully appreciate how everything fits together. That might have worked for the movie Memento, but it’s not a strategy for presenting research.
One of the easiest ways to get your audience lost is to abruptly shift between ideas without context or warning. It’s fine to write your article in separate chunks as you’re getting your head around how you want to present it, but don’t just glue your paragraphs together and hope for the best. (This is one reason I don’t like dissertations that are just 4 papers stapled together, but that’s a rant for another day).
Tell your audience why you’re taking them from one idea to the next. Build the transitions into your paragraphs. Point out connections. Use phrases like “This is important because…” and “Building on this finding, we…” to help your audience follow your logic. It doesn’t need to be fancy, just cohesive.
Don’t write like you eat thesaurus pages for breakfast
Complex ideas have enough complexity by themselves. They don’t need complex language to go with them. Overly academic-sounding phrasing obscures the impact of your message, because every time your audience has to look something up, they’re pulling their attention away from your topic. Everything should be clear from the information you provide. And no, this isn’t dumbing it down; it’s providing clarity and readability without unnecessary padding.
“The plethora of epistemological frameworks deployed throughout the discourse exemplifies the post-structuralist fragmentation of…” OR “We used different theoretical models to explore how this idea plays out in practice.” Which one would you prefer to read?
Provide ‘maps’ for your figures
A complicated figure explains nothing if your audience has no clue what they’re looking at. Sticking arrows on images doesn’t help if you don’t clearly state what the arrows are pointing to and why. I have seen many a figure legend that describes the results shown in the figure but absolutely nothing about where to find those results in the figure itself. Make sure your figure legends are as detailed and descriptive as they need to be for someone to understand what they are showing without having to go back to the results section to find out what the different labels mean.
Consider clarity throughout your manuscript
There are several places throughout a scientific paper where the curse of knowledge can trap you.
· Abstract: Your abstract is the one part of your paper that most people will actually read, even if they never make it past the first sentence of your introduction. It has to do more than just summarize; it has to make people care and communicate clearly. This means clearly stating the problem your study addresses and what your solution is. If the abstract doesn’t explain the why, reviewers and readers will fill in the gaps with whatever they assume your paper is about. They’ll often be wrong, because they’re filling those gaps with their own knowledge, not the knowledge you have but haven’t communicated. Don’t bury your main finding in the last sentence like it’s a plot twist. You want to publish in Nature, not on Netflix.
· Introduction: This is where the biggest trap lies. You’ve been living inside this research question for months (if not years), so you subconsciously forget that other people haven’t and skip the context. I see a lot of papers that jump right into specifics without providing the background literature that’s critical for making those specifics make sense. If your audience doesn’t understand why your study matters or what problem you’re trying to solve, they’re not going to appreciate your findings. Make sure your introduction actually introduces your work.
· Methods: This section is often full of too much information or not enough. I don’t need to know the brand of ceiling tile in your flow cytometry room, but saying that you used “mice” isn’t enough (these are, unfortunately, both real examples). Be specific, clear, and comprehensive. Do not write “per standard methods” or “as previously described” without a clear reference to where those methods can be found. Do not reference a method in a paper that also references that method “as previously described” in another paper, leading your audience down a 10-paper-deep rabbit hole to find the paper where the method was actually written out in full. There is no method that everybody knows how to do. Even Western blotting. I’ve done Western blots in about 20 different ways. Be specific!
· Results: It’s surprisingly easy to accidentally create a confusing results section. How? By dropping in data without explaining how it connects to your research question, or by simply listing your results in the order you obtained them instead of rearranging them to align with your narrative. If your results section looks like someone dumped a bunch of graphs on a table and said, “Here’s what we found”, it needs some work. Make sure every section has a few sentences explaining how it fits into the overall story of your project. It should be abundantly clear to the audience exactly why they are looking at a piece of data and why it’s being shown at that point in the paper.
· Discussion: This section should tell your audience what your results mean, how they fit into the broader context, and what your study’s implications are. Things go wrong here when descriptions are vague, when findings are said to align with or contrast with previous studies without a clear explanation of what those studies actually demonstrated, and when significance is overstated. Don’t forget that your audience hasn’t read the literature you’re contextualizing. The relevance might not be self-evident, so explain the context you’re working in.
TL;DR:
There’s no prize for making your work sound difficult, so don’t forget to explain the foundational knowledge your audience needs in order to understand the impact of your study.