Agentic AI & the Hidden Cost of Derivative Intelligence

Coming out of ServiceNow’s Knowledge conference, I’ve found my AI hype batteries fully recharged. Everything was AI—ServiceNow, customers, vendors; everyone was buzzing about the promise of agentic AI. The big question: What AI agents should we prioritize to better serve our customers and enhance our solution offerings?
Unfortunately, the skeptic in me couldn’t help but notice what’s not being talked about enough: the care and feeding required to make AI work well and sustainably.
The Illusion of “Set It and Forget It”
After sitting through numerous presentations, two patterns became clear:
- Many of these AI solutions demand more than just initial training. We’re talking about coaching, guidance, tone correction (yes, even politeness), and ongoing performance audits. These agents aren’t a Ron Popeil-style “set it and forget it” solution. They require continuous tuning, validation, and auditing—consistent stewardship by real humans.
- Because these AI agents are developed and trained from existing or created content, they are essentially a snapshot in time. From that point forward, without a consistent stream of new and different data, they will simply be derivative, limited to regurgitating the same content. This may not be an issue for very narrow or niche AI agents, but any agent that introduces generative AI capabilities may suffer.
This got me thinking about something I saw online where someone put a photo of Dwayne “The Rock” Johnson into an AI image generator and instructed it to recreate the image exactly. The first pass was fairly accurate, but with repeated copying and regeneration, the image degraded. Fast. Soon it resembled something more Picasso than People’s Champ.

This illustrates a key risk: AI systems can degrade or drift from their original intent if left unchecked.
The Challenge of Derivative AI
From a large language model (LLM) perspective, this idea is referred to as “model collapse”: when an AI runs out of fresh, diverse data and starts learning only from its own past outputs or synthetic content, it becomes… derivative.
The previous school of thought was to solve for this by focusing on smaller, domain-specific LLMs tailored to curated, validated datasets to improve performance and consistency. This is a smart approach, but even smaller models face the same trap: once the dataset is consumed, any new content becomes regurgitated variations of the old. Fresh insight dies out.
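To make the mechanism concrete, here is a minimal sketch in Python. It is purely illustrative and not tied to any particular model or vendor: the “model” is nothing more than a mean and a standard deviation, and the 1.5-standard-deviation filter is an assumption standing in for the way generative systems favor their most typical outputs and under-represent rare cases when retrained on their own content.

```python
# Toy illustration of model collapse (not any specific LLM or product):
# fit a trivial "model" -- just a mean and a standard deviation -- to data,
# then repeatedly retrain it on its own generated samples. The 1.5-sigma
# filter is an assumption standing in for a generator that favors its most
# typical outputs and drops rare cases.
import random
import statistics

random.seed(0)

# Generation 0: diverse "real" content, modeled as a wide distribution.
data = [random.gauss(0.0, 10.0) for _ in range(2000)]

for generation in range(10):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    print(f"generation {generation:2d}: diversity (stdev) = {sigma:6.2f}")

    # Retrain only on the model's own outputs, keeping its "typical" samples.
    samples = [random.gauss(mu, sigma) for _ in range(2000)]
    data = [x for x in samples if abs(x - mu) <= 1.5 * sigma]
```

Run it and the spread of what the model can produce shrinks by roughly a quarter each round; within ten generations the once-diverse content has collapsed to near-identical output. Mixing in a steady stream of fresh, externally validated data each round is what keeps that spread from collapsing, which is exactly the stewardship problem the rest of this post is about.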
For instance, I recently spoke to an organization building an AI agent that ingests assessment content, PowerPoint decks, and other consulting materials. The goal is to accelerate content generation for similar customers. But if that content becomes the only source for future work, you create a feedback loop. The innovation stops and the agent simply remixes, endlessly. Worse, will the content degrade or shift over time? Will the tone, style, and insights fade?
I believe the answer is yes, and organizations need a strategy to deliver a consistent stream of new content, as well as continuous auditing and oversight of these AI agents.
Rethinking Roles in the Age of Agentic AI
At RL Canning, we’re addressing this challenge head-on. As we roll out agentic AI to support service desk operations (automating ticket triage, executing playbooks, and driving self-service automation), we are also rethinking our human talent strategy.
Instead of eliminating roles, we’re elevating them.
We’re hiring people with broader skill sets and training them to:
- Audit AI responses for quality, accuracy, and tone
- Continuously coach and tune AI models
- Curate and refresh content libraries
- Write and update knowledge base articles
- Link AI systems to fresh, external sources of validated data
Because if you’re not actively maintaining the AI’s diet of information, it becomes obsolete. Garbage in, garbage out—only now, it’s polite, confident garbage.
Intentional AI Stewardship
As we develop more and more agentic AI tools, curating both the AI and the data it learns from becomes a full-time job. This isn’t just a technical task—it’s a strategic one. It’s a new function within IT service organizations that will require resources, planning, and talent development.
The AI revolution won’t just change what we automate—it will change the skills we need, and where we need them.
We’re just beginning to scratch the surface of this conversation. So, if you’ve got insights, articles, or experiences related to curating AI and maintaining freshness in your models, I want to hear from you. Drop a comment or share a link. Let’s start the dialogue now before our AIs turn into abstract versions of The Rock.