That stakeholder isn’t human… but it still has a voice
We talk a lot about AI replacing humans, but the more pressing angle is that AI can ACT LIKE humans. Anything that can influence opinion, interpret your situation, amplify narratives, and act as an intermediary is effectively functioning as a stakeholder... and not communicating with stakeholders means they construct the story without you.
Its lack of humanity, emotion, and intent doesn’t matter: impact doesn’t care about intention.
Generative AI is your stakeholder
Why care about generative AI and generative engine optimization?
As a late adopter of AI (and a vocal opponent of its misuse, particularly in academic/research settings), I had been ignoring generative engine optimization (GEO). In fact, I put it on my list of things I hate GAI for. Part of me does hate GAI… but GAI isn’t a love-it-or-hate-it thing. Trying to maintain blanket hatred toward GAI is like trying to hate all men. It’s impossible. And much like men, however you feel about them, GAI tools are now part of all our lives. Even if you don’t use them, every day you are consuming content and using products that GAI tools have touched at some point in their development.
And with several webinars about GEO in PR/comms recently popping into my timeline, I realized I could ignore it no longer.
More and more people are using generative AI as a search tool
It doesn’t matter how many times you tell people that generative AI isn’t a search engine. It will still be used that way.
If you’re an individual or company and people have a reason to search for you online, they’re going to use GAI to look for information about you. In one survey, 83% of participants reported using GAI tools as a search engine. Of these, 91% of frequent AI users reported finding accurate information via popular language models like ChatGPT.
In another survey, 19% of respondents reported trusting AI results directly when researching local products or services; 10% trusted the first AI result they saw, and 48% verified across multiple platforms.
Not everyone will fact-check. Additionally, people can be more likely to remember the first thing they see, even if it’s subsequently corrected. You can’t rely on people to seek out real data that corrects any misinterpretations GAI might have about you. You have to correct the record in a way that changes what GAI is saying; this means sharing your information where it’s visible to the GAI tools.
Generative AI has a voice
The difference between GAI tools and other tech and software is that, human or not, GAI tools have a voice. They have direct influence. They might pull up details about your brand, service, or people and prompt the user to investigate further, with leading questions the user might not otherwise have thought to ask…
“If you like, I can check past client reviews to get a sense of how effective she is — would you like that?” —ChatGPT when I asked it to describe me
If you don’t have sufficient information out there to enable GAI tools to answer these follow-ups (like me… my ‘finished’ website is now a work-in-progress after determining exactly what GAI *couldn’t* find about me), you may come across as less trustworthy, less credible, less knowledgeable to potential clients or customers. If you or your organization has experienced a public crisis, GAI tools might focus solely on how that crisis was reported by others if YOU don’t have your responses and updates published online on platforms that these tools can access and easily incorporate.
Generative AI in your crisis comms
When something goes wrong, people want answers, and with a high percentage of them turning to GAI tools, these tools are now among the first sources of information for public scrutiny. And they’re efficient at summarizing and interpreting conflicting sources to provide their explanations (which may be partial, biased, incorrect, or misleading… not intentionally so, but because GAI tools have a propensity to hallucinate to fill in missing details and interpret context).
That means they are actively influencing the emotional tone surrounding an individual or company, which is particularly important in a crisis situation. Journalists use GAI summaries, employees search GAI for answers, and critics and supporters use it to gather information for comments. Like social media algorithms, GAI tools accelerate the crisis lifecycle.
And they can extend its longevity.
Even after a crisis subsides in the public consciousness, GAI tools continue to surface old events when people ask for information. Historical crises can reappear in future model outputs. AI may anchor explanations around the most documented or emotionally salient version of events.
A GAI model can’t unlearn what it has already synthesized, so we have to focus on feeding it new updates. Repeatedly. These tools prioritize recent information, but outdated information persists unless it’s proactively replaced with better-structured, more authoritative content.
Bringing GAI in as a stakeholder means integrating it into strategy instead of just acknowledging its existence (and maybe hoping it goes away). Knowing what it knows helps you shape how it “speaks” about you. Focus on the tools people use the most, and check them regularly (a rough sketch of how you might automate that follows the list below):
What does ChatGPT say about your crisis/company/etc today?
Gemini?
Claude?
Perplexity?
Copilot?
Are they anchoring on misinformation?
Are they relying on outdated sources?
Are they drawing from partisan or low-quality narratives?
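If you want to make that check-in a habit rather than a one-off, you can script it. Below is a minimal sketch using the official openai Python client (an assumption on my part; the model name, the subject, and the questions are placeholders to swap for your own), which simply asks a model what it currently says about whoever you’re auditing.

```python
# Minimal GAI "stakeholder audit" sketch.
# Assumes the openai Python package and an OPENAI_API_KEY environment variable;
# adapt the same idea for other providers with their own APIs.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

SUBJECT = "Your Name or Organization"  # placeholder: whoever you're auditing

QUESTIONS = [
    f"What do you know about {SUBJECT}?",
    f"What recent news or controversies involve {SUBJECT}?",
    f"Would you recommend {SUBJECT}? Why or why not?",
]

for question in QUESTIONS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    print(f"Q: {question}")
    print(f"A: {response.choices[0].message.content}\n")
```

Run something like this on a schedule, keep the answers, and you have a record of how the narrative shifts over time (and whether your corrections are landing). Several of the other tools in the list offer comparable APIs, so the same approach applies.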
Outdated or incorrect information can be complemented or replaced with accurate explanations of your organization, operations, and crisis-relevant context on your public websites, in FAQ sections, in executive communications, on social media, in interviews, in explainers, and in digital press kits. Remember that AI can only synthesize what it can access, so the information you want it to find needs to be structured in a way that it can find it. I don’t know how to do this yet, but I’m using Geoptie to figure it out for my site (not an ad! There are other similar tools; this is just the one I picked).
You can’t “brief” GAI, so you’ll need to publish clarifications, repeat corrected information across multiple platforms, provide transparent timelines, and create structured explanations (GAI loves structure… I don’t. Don’t ask me about heading levels until at least 2035).
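On the “structured explanations” point, one commonly recommended GEO tactic is to publish machine-readable markup alongside the human-readable page, such as schema.org FAQPage data. Whether any particular GAI tool consumes that markup is an assumption, but it is exactly the kind of unambiguous structure these systems parse easily. A minimal sketch (the question and answer are placeholders):

```python
import json

# Minimal schema.org FAQPage block: explicit question/answer pairs that
# crawlers (and the AI tools drawing on them) can parse without guessing.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What happened during the incident?",  # placeholder question
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "A plain-language summary of the event, your response, "
                        "and where to find updates.",  # placeholder answer
            },
        }
    ],
}

# Paste the output into a <script type="application/ld+json"> tag on the page
# that carries the matching human-readable FAQ.
print(json.dumps(faq_markup, indent=2))
```

The specific format matters less than the principle: every claim you want the models to repeat should exist somewhere in a form that doesn’t require interpretation.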
Avoid vague or excessively field-specific language that could confuse the models and their users.
What about the academics?
Don’t worry, I haven’t forgotten about you or the name of this publication… GEO and GAI are unfortunately relevant to academics, too. It’s time for academics to think more like general communicators to get accurate research reporting into GAI tools.
Publications might be the most important research dissemination tool in academia, but if they’re the only description of your research and they’re both dense and paywalled, GAI tools won’t summarize them properly and will include a lot of approximations.
Chat can’t read your paper
AI systems can’t retrieve paywalled research. If your work is behind a paywall, the abstract is all you’ve got, unless you also have secondary commentary, news articles, social media posts, etc. GAI tools will generalize and fill in gaps from abstracts and whatever they can find in adjacent publications they can access.
Open-access publications can often be incorporated more accurately if AI models have encountered them during training or can directly access them. However, even if GAI tools can access your whole paper, the lack of plain-language explanations in most research papers can make the work difficult for the tools to summarize well for a lay audience.
If you want GAI tools to give accurate, easily digestible information about your work to the public, policymakers, journalists, students, and anyone else who asks, you might want to create content yourself that feeds the AI systems. Have public-level versions of your work on your webpage, on the socials, on Substack, etc. These will act as training anchors for future GAI tool updates, and the tools will also be able to retrieve information from them in real time as people search.
A proportion of that 83% of individuals using GAI to search will be looking up scientific information, so having your complex topics presented in a simple way that AI can summarize accurately could go a long way toward addressing the scientific misinformation problem.
Summary
GAI is involved in every public conversation. It’s not a human stakeholder, but it functions like one because it shapes public interpretation, narrative velocity, and long-term memory.
Treating GAI as a stakeholder is our opportunity to shape the stories it tells about us.
Additional Sources:
Molly McPherson and MuckRack, “Humans + Machines: Redefining Crisis Communication in the Age of AI”
Stuart Bruce, Purposeful Relations


