Interest in artificial intelligence (AI) tools is exploding across businesses and in popular culture. Users are just beginning to explore how to effectively leverage AI’s capabilities and vast potential.
Marketing professionals are monitoring how AI could transform loyalty and customer experience. One intriguing emerging opportunity is synthetic market research—AI-generated content that mimics human responses without relying on actual data from real users.
While traditional research remains invaluable for gaining customer insights, the process takes time. A study needs to be designed. The target audience needs to be identified. Participants must be recruited. After the study is fielded, the collected information needs to be analyzed. And if you’re seeking insights from global markets, the required time and effort are typically multiplied.
Large language models (LLMs)—AI tools trained on enormous datasets that can process and generate human-like text—can provide a useful way to get a head start on customer insights, especially when time is of the essence and absolute certainty isn’t necessary for an informed decision.
However, like most aspects of AI, it’s essential to follow some basic best practices when deploying LLMs for qualitative synthetic research. Here are three of the most important:
1) Remember, it’s all about the prompt!
You know the phrase “garbage in, garbage out”? It absolutely applies to using AI for idea generation. In the research context, whether you’re creating personas, running synthetic focus groups, or developing customer road maps, to get the best results, you need to provide solid direction.
Begin by clarifying the role of the person requesting the research, the exact deliverables you’re working to develop, and the format those deliverables should take. An executive or strategist may be looking for higher-level direction, whereas a program manager might be looking for more detailed executional findings. Clarifying the audience and deliverables provides context and can influence the level of granularity that is included in the response.
Think of the prompt as the start of a strong research brief. The prompt is the primary ask. A well-written prompt articulates goals, objectives, and deliverables, and it ensures the focus is clear. Are you looking for broadscale research, or are you interested in a very specific audience? The more precise you can be with your request, the more likely you’ll glean useful output for your project.
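To make this concrete, the elements of a research brief can be assembled into a structured prompt. The sketch below is illustrative only: the role, objective, deliverable, and audience values are hypothetical examples, not prescribed wording.

```python
# Sketch: assemble the core elements of a research brief into one prompt.
# All field values below are illustrative assumptions, not prescribed text.

def build_prompt(role: str, objective: str, deliverable: str, audience: str) -> str:
    """Combine role, objective, deliverable, and audience into a single prompt."""
    return (
        f"You are assisting a {role}.\n"
        f"Objective: {objective}\n"
        f"Deliverable: {deliverable}\n"
        f"Target audience: {audience}\n"
        "Keep responses neutral and clearly structured."
    )

prompt = build_prompt(
    role="loyalty-marketing strategist",
    objective="identify drivers of repeat purchase in a retail loyalty program",
    deliverable="three synthetic customer personas with motivations and barriers",
    audience="frequent shoppers aged 25-45 in the United States",
)
print(prompt)
```

Spelling out each element this way makes it easy to adjust granularity for different requesters—swap the role and deliverable, and the rest of the brief stays intact.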
2) Provide more context for better results
Once you’ve got your initial prompt crafted, add additional details to help inform your output. Here, in the core of the creative brief, you can really get into specifics.
Indicate the brand or vertical. Where does it operate? What are its priorities? How does it differentiate itself? Who are its customers? All of these elements will influence the output you receive.

Consider uploading any relevant documentation you want the LLM to consider. For example, an annual report that includes a footprint of physical store locations could be helpful because geography might affect results. What about any reports or news articles that provide insight about trends impacting the industry? Is the market growing or shrinking? What are the key drivers? Where is the opportunity? Similarly, do you have information on existing customer segments or profiles that should be taken into consideration when creating synthetic personas? These types of artifacts could shape your results in crucial ways.
While context is extremely important, be mindful of introducing bias, even in a synthetic setting. Your language and tone will be reflected in the results, so it’s important to apply the same best practices you’d use in a traditional focus group setting. For example, you need to remain neutral, ask open-ended questions, and draw out expanded responses to ensure all perspectives are considered.
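One lightweight guardrail for the bias concern above is to screen interview questions for leading language before they reach the model. The phrase list in this sketch is an illustrative assumption—a real screen would be broader and tuned to your study.

```python
# Sketch: flag potentially leading question phrasing before sending it to an
# LLM. The phrase list is illustrative only, not exhaustive.
LEADING_PHRASES = ["don't you agree", "isn't it true", "wouldn't you say", "surely"]

def is_leading(question: str) -> bool:
    """Return True if the question contains a known leading phrase."""
    q = question.lower()
    return any(phrase in q for phrase in LEADING_PHRASES)

# A leading question vs. a neutral, open-ended rewrite of the same topic.
leading = is_leading("Don't you agree our rewards program is best in class?")
neutral = is_leading("How would you describe your experience with the rewards program?")
```

The same discipline you would apply in a live focus group—neutral wording, open-ended framing—applies here, just checked mechanically.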
3) Stay proactive about questioning the tool
Always keep in mind that, by design, AI platforms want to provide answers. They want to please you. This can lead to hallucinations—i.e., nonsensical, misleading, or flat-out wrong outputs stated as facts. Essentially, current LLMs are designed to fill in the blank—and some answers are more accurate than others.
You can mitigate the impact of this potential pitfall by asking clarifying questions of the LLM. If you uploaded reference documents to provide context, confirm the tool can read and interpret them. Invite the tool to provide a quick summary of the artifact or share key thoughts about it. These strategies will help you determine if the tool understands the context you provided. If the LLM can’t interpret the artifacts, provide additional context to keep the synthetic research on track.
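The verification step described above can be captured as a short, repeatable checklist of prompts to run before fielding the synthetic study. The prompts and the document types they reference below are illustrative assumptions; the model-calling function is supplied by the caller, so this sketch stays tool-agnostic.

```python
# Sketch: verification prompts to confirm the model has absorbed uploaded
# context before the synthetic study begins. Wording is illustrative only.
VERIFICATION_PROMPTS = [
    "Summarize the uploaded annual report in three sentences.",
    "List the key market trends mentioned in the attached articles.",
    "Which customer segments from the uploaded profiles will you draw on?",
]

def verification_round(ask_model) -> list[str]:
    """Run each check through a caller-supplied function that queries the LLM."""
    return [ask_model(p) for p in VERIFICATION_PROMPTS]
```

If any response comes back empty or off-base, that is the signal to supply more context before trusting the personas or focus-group output.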
It’s imperative to use human critical-thinking skills to evaluate the LLM’s output. If you’re designing personas and something doesn’t look quite right, challenge the output or ask for clarification about how the persona was derived. Similarly, when you run a synthetic focus group, request clarification on persona responses—just as you would if you were moderating a live session with human participants.
Synthetic research is a tool, not a replacement
LLMs can help you quickly generate preliminary profiles and hypotheses. Synthetic research can serve as a trial run to highlight anything in your research approach that might be unclear, providing an opportunity to recalibrate your interview guide before fielding it with actual customers.
As you consider how synthetic research powered by LLMs could advance your marketing and loyalty strategies, remember that these models are trained on available internet data. So, if your audience is extremely niche, the robustness and accuracy (or lack thereof) of AI-generated responses will likely be affected. Responses from broader and better-documented personas will be more reliable.
Don’t underestimate the irreplaceable value of reading the vibe of the room in traditional interviews or focus groups. Valuable information that’s implied—or implicit in body language and tone—can’t be captured in transcripts generated by an AI tool. Human participants are more complex and nuanced than AI-based personas and provide more depth than is possible with current LLM tools.
The bottom line is that synthetic research has the potential to provide important, time-saving insights that, coupled with human critical thinking and interaction, contribute to strong decisions for moving your business forward.
Nina Rose is senior director, Loyalty and Membership Design, at The Lacek Group. For more than 30 years, The Lacek Group has been perfecting the art and algorithms of brand devotion. We help world-class brands identify their highest-potential customers, engage them across channels throughout their lifecycles, personalize each relationship for optimal long-term results, and measure the true effectiveness of those efforts. The Lacek Group is an Ogilvy One company.