2026 customer service planning series: Vol. 04

Learn how to build an operating model where every resolution improves the system, so that fewer issues repeat, quality compounds, and support becomes more scalable over time.

Once you’ve defined the right roles on your team, you need an operating model that makes progress an integral part of how things work and keeps the AI Agent improving over time.

At Intercom, we use a simple mantra to guide how we think about this: “The first time you answer a question should be the last.”

This is part four of our five-part series on customer service planning for 2026. We’ll be sharing all five editions on our blog and on LinkedIn.

If you’d rather have them emailed to you directly as they’re published, drop your details here.

The goal is an operating model where every resolution improves the system: fewer issues repeat, quality compounds, and support becomes more scalable over time.

Getting this right takes intentional design. It takes clear ownership, guardrails that let you move quickly without risk, a way to feed insights back in, and a culture that embraces and celebrates the work, not just the outcomes.

Let’s break that down.

1. Start with clear ownership

One of the most common reasons AI performance plateaus is ambiguity.

When no one owns how the AI Agent performs, feedback gets lost, issues linger, and improvements stall. 

High-performing teams assign a single owner who’s responsible for making the AI Agent better by:

  • Reviewing resolution trends and identifying where the system is underperforming.
  • Making targeted updates to content, configuration, and behavior.
  • Coordinating with product and engineering on systemic blockers.
  • Setting improvement priorities, targets, and timelines.

That owner (often referred to as the AI ops lead) typically sits within support operations or grows out of an existing role. The title or team they sit on isn’t important. What matters is that they take clear ownership and have the authority to drive change.

Real-world example

At Dotdigital, AI performance plateaued after a strong start – resolving around 2,800 conversations per month for three consecutive months. To drive resolution rates up, the team created a dedicated support operations specialist role, filled by an experienced agent with deep product knowledge. This person now focuses on refining snippets, improving content, and enhancing the AI’s resolution capabilities.

 

2. Make iteration fast and safe

As the AI Agent handles more volume and complexity, change might start to feel risky. And when teams hesitate to make changes, performance stalls.

That’s where lightweight governance comes in: a clear way to keep iterating without bureaucracy or endless approvals.

Teams that have found a good rhythm here put a few principles in place:

  • Everyone knows which changes need review, and which don’t.
  • Decision-makers are named.
  • Updates are tested (lightly but reliably) before they go live.
  • Feedback flows through one place, so it’s seen and acted on.
  • Progress happens on an agreed schedule (weekly reviews, monthly checkpoints, quarterly planning, etc.), not just when someone has time.

Real-world example

Anthropic ran a focused “Fin hackathon” sprint to improve their AI Agent’s resolution rate. The team audited unresolved queries, identified underperforming topics, and created or updated content to close gaps. They converted frequently used macros into AI-usable snippets, monitored Fin’s performance during live support, and continuously refined content based on real interactions. This structured approach enabled rapid improvement while maintaining quality standards.

Governance isn’t extra overhead or red tape. It’s what makes improvement routine and safe. When the path from insight to action is predictable, your AI Agent gets better every week and your support system keeps scaling with it.

 

3. Build a system that learns by default

AI performance isn’t static, but most teams treat it like a one-time implementation. The most successful organizations design systems that learn: they analyze where the AI Agent struggles, then feed that insight directly into structured improvement.

That might look like:

  • Reviewing common handoff points to humans.
  • Tracking unresolved queries by topic or intent.
  • Measuring resolution rate trends over time.
  • Using these signals to prioritize fixes or content upgrades.

Whether you follow a formal loop (like the Fin Flywheel framework) or something simpler, the goal is the same: make improvement inevitable.

 

4. Treat content as competitive infrastructure

Your AI Agent is only as good as what it knows. This makes content strategy a competitive advantage, not just a support function.

“That’s when we realized: AI doesn’t just come up with information out of nowhere, you have to feed it. We were spending all our time evaluating tools when we should’ve been focused on content.” – George Dilthey, Head of Support at Clay

You need to treat knowledge like infrastructure, where:

  • Every topic has a clear owner.
  • Content is structured, versioned, and ingestion-ready.
  • New products ship with source-of-truth content by default.
  • Changes are shipped on a schedule, not when someone finds time.

Real-world example

At Intercom, we’ve evolved our New Product Introduction (NPI) process by aligning early with R&D on a single, canonical source of truth that becomes the foundation for all downstream content – including what the AI Agent uses to resolve queries. By embedding content creation into launch readiness, not as an afterthought, we’ve consistently hit 50%+ resolution rates on new features from day one.

This infrastructure layer often determines whether teams scale confidently or stall out. Without it, every improvement is harder and AI performance remains inconsistent. With it, your AI Agent gets better every day – and the system compounds.

 

5. Make belief visible

Even the best system won’t keep improving if people stop believing in it. Belief will fade quietly if you don’t reinforce it. 

Keep it strong by:

  • Sharing specific wins regularly.
  • Highlighting improvements with metrics.
  • Recognizing the people behind those improvements and giving them space to lead.

This is about more than just team morale. It’s about keeping everyone aligned and excited about the bigger play you’re all part of.

 

Putting it all together

Building an AI-first support organization means having the right people and the right systems to support them.

When ownership is clear, iteration is safe, knowledge is reliable, and belief is visible, AI performance compounds. And as the AI Agent gets better, your entire support model gets faster and more scalable.

This is the foundation of a modern support organization.

Next week, we’ll take this one level deeper and explore how capacity planning changes when AI handles the majority of your work and your team moves into higher-value roles.

To follow along with the series and have each new edition emailed to you directly, drop your details here.