Everyone Says You Must Use AI. Everyone Hates When You Do.
(And how to respond when Reviewer #2 thinks your manuscript was written by a robot)
Yesterday, I came across this post on LinkedIn:
I’ve been thinking about AI use a lot recently, mainly because of the sheer volume of AI and AI-related content that comes across my social media feeds. What’s interesting to me is the two most common themes in these posts:
Promoting AI
Discrediting people for using it
The AI paradox
We’re constantly hearing messages that AI adoption is a necessity if we want to remain competitive, whether we’re an individual or a business. When I worked for a grant writing agency last year, almost every founder I talked with was integrating AI into their product. Investors are pouring billions into AI startups, governments are launching national AI strategies, funders are prioritizing AI-enhanced projects, and individuals expressing fears about being replaced by AI are often encouraged to adopt it to stay relevant.
You have to get on board with AI, or you’ll be left behind in your career
Adapt or become obsolete
The future belongs to those who embrace AI
Even tools we’ve had for decades have been updated to include an AI element (whether helpful or almost entirely useless… Google AI search, I’m looking at you). Given the amount of funding going toward AI development right now, it’s going to become increasingly hard to work without using it, even if indirectly. And at the organizational level, it’s quite accepted. Educational institutions are incorporating AI literacy into their curricula, professional development programs center around AI tools, job descriptions increasingly list AI proficiency as a desired skill…
But if you spend more than five minutes on LinkedIn and your feed is anything like mine, you’ll see multiple posts questioning people’s credibility if they use AI, comments calling out posters for using AI (“So obviously ChatGPT wrote this…”), and, my personal favorite (</sarcasm>), endless rants about how em dashes are the most obvious sign of AI writing. RIP to my favorite punctuation mark.
People are dedicating their time and energy to catching others out, identifying and publicly calling out suspected AI use. In response to AI tools that do things for us, we’re getting AI tools that supposedly detect when AI has been used (one of these flagged text I wrote years before these tools existed as almost 100% AI-generated). And then there are tools that detect AI writing and claim to rewrite it so it sounds more human.
So we’re being sold AI everywhere… then being criticized for using it. If using AI is considered bad or makes someone untrustworthy, who exactly is the target market for all this AI technology?
Are we developing an entire industry worth hundreds of billions of dollars, creating tools that are being integrated into every major software platform, investing enormous resources into making AI more capable and accessible just so people can... not use them? So they can use them in secret and live in fear of being caught?
This makes no sense from any angle. We can’t logically build an AI economy and AI stigma simultaneously. We can’t push adoption while punishing adopters. We can’t market tools as essential while treating their users as suspect. But we are. Because we have yet to develop a mature, coherent understanding of how AI should fit into professional life. We’re caught between this hype cycle pushing adoption and a legitimate concern about maintaining standards.
The result is this contradictory messaging where AI is simultaneously mandatory and forbidden, which helps no one.
I’m not anti-AI, but…
If you know me or have seen some of my previous content on AI, you’ll know I’ve been against AI-generated writing since these tools first emerged. I continue to be horrified by what AI has done to the careers of writers, editors, and other creatives. We’re often told that the idea of AI replacing us is a myth, usually by people who are actively replacing large parts of our processes with AI right in front of us. Just over the past couple of months, I’ve noticed a massive increase in writing and editing work that essentially boils down to “rewrite this so it sounds more human” or “make sure it’s not obvious we wrote this with AI”. I’ve seen countless posts on LinkedIn from copywriters desperately seeking work after decades-long successful careers, people losing long-standing clients as companies decide that good-enough AI content is preferable to paying a human. It’s probably going to ruin the internet (AI search tools scraping AI-generated content to give AI-generated responses). It seems dystopian.
But reality demands nuance
AI isn’t going anywhere. That ship has sailed, and no amount of individual resistance will call it back. AI tech is being integrated into virtually every tool and platform we use. Try finding new software without an AI component. Go ahead, I’ll wait... Within a few years, avoiding AI entirely may be as impractical as refusing to use computers was in the 1990s.
If institutional forces across society are pushing AI adoption and if rejecting these tools is career-limiting or professionally disadvantageous, it cannot simultaneously be appropriate to devalue people for using them.
If someone uses AI to help draft a section of a scientific paper, then carefully edits and fact-checks that content, adds their expertise and insight, and produces work that is accurate and valuable… what exactly is the problem? The outcome is what matters. If the research is sound, the methodology robust, and the conclusions valid, does it really matter whether a language model helped with the initial phrasing?
We don’t shame academics for using reference management software instead of manually formatting bibliographies.
We don’t discredit researchers for using statistical software instead of calculating everything by hand.
We don’t question the legitimacy of work that was spell-checked by a computer rather than a human proofreader.
These tools help us work more efficiently while maintaining quality… so, why not AI?
Judge the product, not the production method
I work at the intersection of scientific research and crisis communications, particularly around reputation management. (Ironically, I branched into crisis comms because AI is reducing the viability of writing and editing as a long-term career path.) The AI accusation in that LinkedIn post above isn’t really about AI. It reflects how AI is being used against people in the same way other things were used before it… AI is just a convenient weapon now. It’s the new “you didn’t cite my paper”, “your methodology has this tiny flaw”, “have a native English speaker proofread your paper”. A way to discredit work or damage reputations without engaging with the substance.
Weaponizing AI accusations is counterproductive and discourages experimentation with tools that might genuinely improve efficiency or accessibility. It distracts from substantive discussions about quality, accuracy, and impact. And it wastes enormous amounts of time and energy, both for those making accusations and those defending themselves. Pretending AI isn’t being integrated everywhere and trying to maintain a pre-AI status quo is futile.
We need a more nuanced, practical approach to AI in professional contexts. A focus on outcomes, not processes. What matters is whether the work is accurate and valuable, not which tools were used to create it. There’s a massive difference between using AI as one tool among many in a human-led process and simply publishing raw AI output. So we need field-specific standards. Rather than blanket prohibitions or unlimited permission, we should establish appropriate guidelines for AI use in the contexts of our professional fields.
The paradox exists because the rewards of technological speed and efficiency conflict directly with our need for authenticity and trust. Until we can resolve it, we’ll continue to waste time and damage careers over a question that ultimately misses the point: whether the work itself is any good.
We need to focus on responsible use. Where technology enhances but doesn’t replace our contributions. How? I’m not sure… but this issue will need to be resolved.
So what if you’re accused of using AI to write your scientific paper?
Since there is no immediate solution to the trend of AI-use accusations, you might find that you need to respond to one. Being accused of using AI to write your paper is essentially an accusation of academic misconduct, and your response should be cautious, focusing on repairing the perceived damage to your professional integrity.
I know you probably want to throw things (or Reviewer #2) out of the window when you receive a peer review containing such accusations, but do that late at night when others can’t see you, and resist the instinct to fire off a defensive statement or blame the reviewer. We want to focus on the facts, not fear or anger.
Get all your evidence together so you have a central source of truth. If there are multiple coauthors, make sure to find out whether any of them did use AI. Frame this not as an accusation but as needing to know so that you can put forward the best response that is most likely to result in publication with everyone’s reputation intact. Of course, a coauthor might lie to you… but that’s a topic for another post. Once you have all the facts, your response should focus on two aspects:
Responsibility (did you use AI?)
Offensiveness (did your use of AI impact study outcomes or violate journal guidelines?)
If you didn’t use AI
If you/your coauthors wrote the paper and the accusation is based on misperception or error:
Deny the accusation directly, clearly, and factually; state that AI tools were not used without necessarily repeating the allegation (use positive or neutral terms)
Provide the facts and verifiable details to support the denial; explain the origin of the perceived “AI-like” text
If the reviewer points to ambiguous evidence, be prepared to defend your interpretation of that evidence
Remind the editor/reviewers of your group’s commitment to integrity, professional competence, and past successful work
Example Response:
We appreciate the thorough review of our manuscript and specifically acknowledge the serious concern raised by Reviewer #XX in Comment XX. We understand that in the current academic climate, such concerns are warranted and must be addressed with honesty and transparency.
We confirm that all intellectual content, data interpretation, and narrative text, including the questioned passages, were produced exclusively by the listed authors without the use of AI tools.
We use precise, specialized technical language to avoid ambiguity in describing complex methods. This detailed technical language is highly specific to the method developed for this study and echoes language previously published by our research group in [Citation A] and [Citation B]. We adhere closely to this established phrasing to ensure consistency across our publications.
We take accusations of misconduct seriously. Our publication record evidences our professional competence and dedication to scientific integrity, and we trust that this response resolves the reviewer’s concern regarding AI usage.
If you did use AI and reviewers are questioning the scientific integrity of your work:
If you/your coauthors used AI and this is being used to question the integrity of your work, but it hasn’t altered the outcome/impact of the study, the response should acknowledge the use of AI but reduce the perceived impact of its use by separating it from scientific integrity concerns.
Accept responsibility for the act of using the tool, but immediately distinguish it from scientific misconduct
Demonstrate that the use did not compromise the core scientific results
Reaffirm scientific competence and offer procedural steps to satisfy policy expectations
Example Response:
We sincerely thank the reviewer for their diligent assessment, particularly the crucial point raised in Comment XX concerning the use of AI in drafting certain sections of the manuscript. We fully recognize that questions regarding authorship and transparency are fundamental to scientific integrity.
We confirm that AI tools were used in the presubmission editing phase to refine phrasing in the Introduction and to convert complex notes into clear descriptions in the Materials and Methods section. Our intent was to improve the clarity and accessibility of the narrative and remove ambiguity.
All AI-generated text was derived from finalized, human-authored data and core interpretations, and all of it has been fully fact-checked. The AI tool was not used to generate, synthesize, analyze, or interpret data. The information upon which the conclusions are founded remains entirely the product of human intellect.
The integrity of the data and its interpretation relies entirely on our statistical tests (detailed in Section XX) and the theoretical framework described in Section XX. We confirm that the computational output is verifiable and credible. Our use of AI has not altered the outcome or validity of the data and their interpretations.
We accept responsibility for the initial lapse in transparency. In the spirit of proactive corrective action and adherence to evolving best practices, we have inserted a clear disclosure statement regarding the specific AI tools used, confirming that their function was purely for descriptive refinement.
We remain committed to rigorous research and ethical scholarly practices, and we trust that this clarification demonstrates our acceptance of accountability and clarifies the minimal, non-scientific nature of our AI tool use.
If you did use AI and the reviewers indicate that it violates journal guidelines:
If you/your coauthors used AI in a way that violates journal expectations, the best strategy is accommodative, focusing on rebuilding trust and committing to future ethical behavior.
Take responsibility for the mistake to demonstrate accountability, sincerely and without self-justification; avoid undermining your message by making excuses or shifting blame
If it was unintentional, explain how it happened to minimize perceived responsibility, but do not use this to shift blame
Fix the problem immediately and describe how you will prevent recurrence in concrete steps
Commit to ensuring the integrity of the work submitted and emphasize that the authors are learning from the error and adopting higher standards for AI use
Example Response:
We wish to address the serious concerns raised by Reviewer #XX in Comment XX regarding the use of undisclosed AI tools in the drafting of specific passages of this manuscript. After a thorough internal review, we can confirm that our draft contained elements generated by a large language model that were not properly edited, disclosed, or cited, resulting in an unintentional violation of the journal’s submission policy.
We take full responsibility for this serious oversight. The responsibility lies entirely with the submitting authors, and we regret the additional time and effort the reviewers and editorial staff have spent identifying this lapse.
We have thoroughly audited the manuscript and are submitting a revised version with all affected passages completely rewritten manually. We have also fully disclosed all assistance provided by generative AI throughout the drafting process in the acknowledgements section.
We view this as an opportunity to reinforce standards, and to ensure this does not recur, we are implementing a mandatory verification step in our research lab for all coauthors, requiring explicit confirmation that all drafts comply with journal AI policies before submission.
We respectfully submit our revised manuscript containing the corrective actions detailed above and confirm that it meets ethical publication standards.