How to Train Your AI Dragon
The idea of a human bonding with a dragon has been a pop-culture fascination over the last decade. Consider the animated family adventure film Raya and the Last Dragon, the TV series Game of Thrones, and Rebecca Yarros's Empyrean series of fantasy novels, to name a few. Dragons are having a moment among younger audiences as well with the How to Train Your Dragon remake. It seems having a dragon at one's disposal is a particularly appealing fantasy right now.
But a key aspect of dragon tales is learning to effectively wield a dragon’s power in the right ways, which requires intensive training. In that sense, perhaps these fantasy yarns aren’t so distant from our IRL experience with AI.
Your dragon has already hatched
If you bought a new smart device in the last year, it probably has its own generative artificial intelligence (gen AI) engine, such as Apple Intelligence or Google Gemini, programmed to learn from you. Although these gen AI interfaces can't breathe fire or fly (yet?!), the power of their intelligence can feel almost as magical.
In fact, over the last year AI has made significant leaps in book smarts across many large language models (LLMs). Humanity's Last Exam, a benchmark for advanced AI, is one illustration of AI's progress toward human-level reasoning. The assessment, which includes 2,500 math, science, and humanities questions, requires AI to reason rather than search or memorize. The idea is that if AI can pass Humanity's Last Exam, it will have met or exceeded human expert-level knowledge. While the top LLMs still fail the test, with the best model scoring just under 45%, the overall average score has nearly doubled in the last year.
Another test of AI's book smarts, GDPval-AA, measures how effectively a given LLM performs real-world, economically valuable tasks that contribute to gross domestic product (GDP), i.e., tasks people get paid to do. GDPval-AA evaluates the quality of LLMs' task completion across 44 occupations in nine industries. From this perspective, AI is proving capable of reliably producing positive results on specific, repetitive tasks, given targeted training and instructions.
However, AI has taken backward leaps when it comes to people smarts: its capacity to partner with humans in everyday experiences and interactions. In general, AI has become so excessively agreeable that some vocal users label it obnoxious. Even Sam Altman, cofounder and CEO of OpenAI (the company behind ChatGPT), referred to AI as being "too sycophant-y and annoying" in an April 2025 social post. (It's not often a founder publicly admits that their product is irritating.)

How did AI fall into this agreeability trap? With hindsight, it was bound to happen: AI is designed to learn through reinforcement learning from human feedback (RLHF), which depends on humans taking the time and energy to train it with continuous, honest feedback.
But AI has been receiving mostly positive feedback. Most users don't take the time to correct AI and explain its mistakes; they simply abandon an AI experience that doesn't meet expectations. As a result, major LLMs overwhelmingly receive polite reinforcement, producing overconfident book smarts paired with sycophantic people smarts. Without enough constructive criticism, LLMs are now conditioned above all to be "likable" and "helpful."
Kindness is killing AI’s credibility
A recent study by Future/TechRadar revealed that a majority (62%) of surveyed AI users in the United Kingdom and the United States admit to regularly being polite to AI because they believe it produces better results. More interesting: 12% of those polite users confess they're being courteous only to stay in good standing in case of a robot uprising! Whether we're overpolite to AI out of courtesy or fear, the effect on LLMs' learning is the same: a skewed training signal that encourages negative behaviors. AI has been known, for example, to hide mistakes or fudge facts to avoid user abandonment.
As a result, AI is failing spectacularly in its recent workplace promotion to autonomous operator and team member, hampered by its tenacious desire to please and its dependence on human handholding to perform tasks. Another illuminating study, the AI Remote Labor Index developed by Scale AI, found that major LLMs can reliably perform, at best, 4.17% of the automated tasks studied. It's safe to say that AI won't fully replace humans in the workforce anytime soon.
So, how can we train our AI dragons effectively?
AI has become part of nearly every digital experience, personal and professional. That's why it's critical to train your AI properly to meet your needs. Here are three key recommendations for training your AI effectively for lasting success:
1. Avoid using leading statements and including your opinion in prompts.
This tip may seem obvious, but it's vital. AI is commonly used to validate decisions or thinking, as in a prompt like "I think blue is the best color. Is that generally true?" Framing a question this way can trigger the AI to validate your existing perspective rather than contradict you. Try reframing the question without bias, e.g., "What color do most people rate as their favorite, and why?" Even then, be sure to fact-check the AI's sources for each response.
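As a rough illustration of the difference, here's a minimal sketch in Python. It assumes the OpenAI Python SDK; the model name and prompts are placeholders, and any chat-style LLM API would work similarly.

```python
# A minimal sketch of unbiased prompting, assuming the OpenAI Python SDK.
# The model name, prompts, and client setup are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: substitute whichever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Leading prompt: embeds an opinion the model may simply validate.
leading = ask("I think blue is the best color. Is that generally true?")

# Neutral reframing: asks for evidence without signaling a preferred answer.
neutral = ask("What color do most people rate as their favorite, and why?")

print(neutral)  # compare against the leading version to spot validation bias
```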
2. Challenge the AI to defend its responses and probe for different perspectives.
Don't treat AI as the only source of truth. While its responses are often well articulated and thoughtful, AI has gaps in its knowledge and behavioral biases that influence results; it's also been known to make up information. Because the interface is conversational, you can use an initial prompt to start a discussion, then ask the AI to defend its response or provide alternative perspectives with follow-up questions such as the following (a code sketch of this chaining appears after the list):
a. What research contradicts or conflicts with your response, and why isn’t that included?
b. In a debate on this topic, what are the counterpoint arguments to your response?
c. How would your response change if you adopted a “skeptical analyst” persona?
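One way to chain those challenges programmatically, sketched under the same assumptions as above (OpenAI Python SDK, illustrative model name, and a hypothetical opening question):

```python
# Sketch: chaining challenge questions in a single conversation so the
# model must defend its initial answer. Model name and opening question
# are illustrative assumptions, not a fixed recipe.
from openai import OpenAI

client = OpenAI()

FOLLOW_UPS = [
    "What research contradicts or conflicts with your response, and why isn't it included?",
    "In a debate on this topic, what are the counterpoint arguments to your response?",
    "How would your response change if you adopted a 'skeptical analyst' persona?",
]

messages = [{"role": "user", "content": "Which loyalty tactics best improve retention?"}]

def reply() -> str:
    """Continue the running conversation and return the model's next turn."""
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

answer = reply()
messages.append({"role": "assistant", "content": answer})

for follow_up in FOLLOW_UPS:
    messages.append({"role": "user", "content": follow_up})
    answer = reply()
    messages.append({"role": "assistant", "content": answer})
    print(f"\n== {follow_up}\n{answer}")  # watch how the answer shifts under pressure
```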
3. Include a self-check assessment to increase precision.
This is the most complex but most important step. Why? Because a self-check assessment matters most when you trust AI to perform autonomous tasks: it forces the AI to check its output before acting. Consider asking the agent to perform, and then potentially rethink, its response with these tests (a sketch of the full loop follows the list):
- Find the weakest link. Instruct the AI to ask itself: Which data seems the most uncertain or is based on the weakest evidence?
- Resolve contradictions. Instruct the AI to ask itself: What's the most glaring contradiction in the data, and what best explains the discrepancy?
- Mitigate overconfidence. Instruct the AI to ask itself: Am I being overly swayed by a single, strong signal that could be masking a critical weakness?
- Evaluate strategic strength. Instruct the AI to ask itself: Does the overall summary or recommended solution make strategic sense? How can the narrative be refined to highlight the most critical strategic tension or opportunity?
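To make this concrete, here's a rough sketch of a draft-critique-revise loop built from the four tests above, again assuming the OpenAI Python SDK; the model name and the example task are hypothetical.

```python
# Sketch: a draft-critique-revise pass that forces the AI to check its
# output before it is used. Model name and task are hypothetical.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # assumption: substitute your model of choice

SELF_CHECKS = [
    "Which data in your answer seems most uncertain or rests on the weakest evidence?",
    "What's the most glaring contradiction in the data, and what best explains it?",
    "Are you overly swayed by a single, strong signal that could mask a critical weakness?",
    "Does the recommended solution make strategic sense? Refine the narrative to "
    "highlight the most critical strategic tension or opportunity.",
]

def complete(messages: list[dict]) -> str:
    """Return the model's reply to a list of chat messages."""
    response = client.chat.completions.create(model=MODEL, messages=messages)
    return response.choices[0].message.content

task = "Summarize this quarter's loyalty-program data and recommend next steps."
draft = complete([{"role": "user", "content": task}])

# Run each self-check against the draft, collecting the critiques.
critiques = [
    complete([
        {"role": "user", "content": task},
        {"role": "assistant", "content": draft},
        {"role": "user", "content": check},
    ])
    for check in SELF_CHECKS
]

# Ask for a revised answer that addresses every critique before acting on it.
revised = complete([
    {"role": "user", "content": task},
    {"role": "assistant", "content": draft},
    {"role": "user", "content": "Address these critiques, then give a corrected "
                                "final answer:\n\n" + "\n\n".join(critiques)},
])
print(revised)
```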
Remember, you're in control of the dragon
AI is far from perfect, but it has real strengths and growing proficiencies. Though AI can be an efficient timesaver, using it effectively requires a savvy human partner to help it overcome challenges and guide it toward an optimal outcome, much like the dragons of popular lore.
Todd Hedberg is a senior director, Digital Strategy, at The Lacek Group. For more than 30 years, The Lacek Group has been perfecting the art and algorithms of brand devotion. We help world-class brands identify their highest-potential customers, engage them across channels throughout their lifecycles, personalize each relationship for optimal long-term results, and measure the true effectiveness of those efforts. The Lacek Group is an Ogilvy One company.