How opinion researchers can stay human in an AI-enabled world
There is no doubt that the widespread adoption of AI is driving broad changes in the way many of us work. As an opinion researcher, it is tempting to talk about what everyone else thinks about AI, but it’s also important to reflect on how this emerging technology is changing the research industry itself.
There is a steady stream of new generative AI tools to help us with tasks at every stage of the research lifecycle, including questionnaire design, open-ended coding, and report creation. Synthetic participants for quantitative and qualitative research are now part of the marketplace.[1] Learning to incorporate these tools into our workflow is necessary if we wish to stay relevant, but we also need to be thoughtful and deliberate about how we do this. Much has been made of the risks and ethical conundrums[2] that come with AI adoption, and by now we have all seen examples of blunders[3] that can happen when AI output is published or implemented without careful human review.
Let’s take a beat to consider, broadly, how this technology works. The most popular tools we refer to as “AI” are built on a technology called the large language model (LLM).[4] An LLM drives generative AI by modelling statistical relationships between words and sentences, and that statistical model becomes the basis of a system that can generate and process language. LLM-driven tools are flexible and customizable; the more language they are given to analyze, the more specific and accurate they become. An LLM’s ability to speak our language is what makes it not only useful but impressive and appealing on its face; many people find that working with an LLM chatbot feels like connecting with a helpful person who is eager to learn.
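For readers who want a concrete feel for the statistical idea behind all this, here is a deliberately tiny sketch in Python. Real LLMs use neural networks trained on vast corpora, not simple word counts; this toy bigram model (the sample corpus is invented for illustration) only shows the core principle that the "most likely next word" is learned from which words tend to follow one another.

```python
from collections import Counter, defaultdict

# Toy illustration of statistical next-word prediction.
# A sample "training corpus" (invented for this example):
corpus = (
    "public opinion research informs public policy and "
    "public opinion research guides public debate"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("public"))  # "opinion" follows "public" most often here
```

Even at this scale you can see why output trends toward the average: the model can only ever echo the most common patterns in what it was fed.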
This brings us to one of the major pitfalls of AI use in research consulting. Because an LLM works by predicting what should come next, based on which words typically appear together in its training data, the material it generates is, by design, an amalgam of the average. The generic tone of AI output is sufficient when all you need is a quick summary of a topic on your own screen, but it often feels inauthentic and uninteresting.[5] Being boring is sin enough, but there are real risks behind all this blandness. To succeed, researchers must aim higher than the mean.
Consider the reputational risks. Blithely accepting AI output as a substitute for thoughtful content can leave you and your organization vulnerable. It isn’t just big blunders and ridiculous hallucinations that put your credibility in peril – the public is becoming savvy to certain traits associated with AI-generated content. And though we may not be as good at detecting it as we think,[6] accusations of careless AI use can become a distraction, or even a disaster that damages your public image.
There is also the risk of making yourself invisible in a competitive marketplace. If you ask your favourite AI platform to help you create marketing content, it will draw on accessible content from competitors and clients. This can be useful for gathering intelligence on the marketplace you operate in, but prompting AI to generate new content for you will produce text that doesn’t stand out, because it is a distilled blend of everything else out there. There is a bit of irony here as well. In an environment where search engine optimization (SEO) tactics are rapidly shifting to account for AI visibility, generic writing will quickly make you invisible to the AI platforms the public uses to curate internet content.
As an insights professional employing AI tools, there is also a risk to the relevance and credibility of your insights. The new generation of AI tools for research can be lifesavers for busy consultants working with fast timelines and tight budgets, but they are not a suitable replacement for close attention to detail or the intelligent interpretation of research results. If you let an AI tool create your reports without oversight and revision, you risk delivering findings that are irrelevant or inaccurate.
Along similar lines, there is a risk that without your expertise, deliverables generated with AI will simply overlook the gold nugget of wisdom that would have elevated your work from being merely a data point to being a beacon of business intelligence for your client.
So how do we, as researchers, embrace this technology without losing ourselves in it?
First, rest assured that it’s not too late to catch up. As a member of Generation X whose first opinion research industry job was doing telephone interviews, I have lived through many phases of technological advancement in this industry, and I intend to live through this one too.
Here are a few strategies I would recommend to my industry colleagues who are also testing the AI waters:
- Train yourself to train the AI. AI tools aimed at the general public may seem easy to use with a quick question or single prompt, but their real power is in the training. Take time to learn about prompt writing, framing your issue, iterative refinement, and reusable workflows. The work you do with AI will be more robust and of higher quality if you understand what AI is good at and where its limits lie.
- Review outputs carefully. Nothing AI-generated should go out the door without careful scrutiny. Check it for quality and accuracy, and remind your colleagues to do the same. Think of your preferred AI platform much as you would the new star intern: bright and poised, but also green and not always aware of its own mistakes.
- Keep your own mind sharp and fresh. As much as you possibly can, write without AI’s help. Interpret research findings yourself, using your own knowledge and experience as the foundation. This is good for your own brain, and it will ensure that your work remains authentic, original, and intelligent.
- Draft a detailed AI policy. If your company doesn’t already have one, it’s time to start. As this technology becomes more embedded in our lives, regulation and the demand for accountability will increase. Be sure that you can tell clients, succinctly, how AI is and is not used in your organization, and what guardrails you have in place.
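One practical way to act on the “reusable workflows” advice above is to keep recurring prompts as parameterized templates rather than retyping them each time. A minimal sketch in Python follows; the task wording, code frame, and field names are illustrative assumptions, not a prescribed format.

```python
# A reusable prompt template for a recurring research task
# (here, coding open-ended survey responses). The wording and
# fields below are illustrative, not a recommended standard.
CODING_PROMPT = """You are assisting an opinion research team.
Task: assign each open-ended response to one of these codes: {codes}.
Survey question: {question}
Responses:
{responses}
Return one code per response, in order, and flag any response
that fits no code rather than forcing a match."""

def build_coding_prompt(question, codes, responses):
    """Fill the template so the same framing is reused every wave."""
    return CODING_PROMPT.format(
        question=question,
        codes=", ".join(codes),
        responses="\n".join(f"{i + 1}. {r}" for i, r in enumerate(responses)),
    )

prompt = build_coding_prompt(
    "What is the main issue facing your community?",
    ["housing", "healthcare", "economy", "other"],
    ["Rents keep going up", "Cannot find a family doctor"],
)
```

Keeping the framing in one place makes iterative refinement easier: when you improve the instructions, every future wave of the study benefits, and your results stay comparable across waves.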
As it was with smartphones, washing machines, and steam-powered looms, I suspect we won’t know where this is all headed until we’re well into the next technological upheaval. I’m curious to know how my peers are navigating this change. What AI function has become invaluable to you? What guardrails are you putting in place?
Though I did use an LLM-enabled search engine to help me research this article, I did not use AI to write any of it!
[1] https://www.forbes.com/councils/forbesbusinesscouncil/2025/08/04/applying-ai-powered-synthetic-respondents-in-market-research/
[2] https://www.unesco.org/en/artificial-intelligence/recommendation-ethics/cases
[3] https://fortune.com/2025/11/25/deloitte-caught-fabricated-ai-generated-research-million-dollar-report-canada-government/
[4] https://www.ibm.com/think/topics/large-language-models
[5] https://www.nytimes.com/2025/12/03/magazine/chatbot-writing-style.html
[6] https://whyy.org/segments/how-not-to-be-mistaken-for-a-chatbot/
