
AI-Human Collaboration in Support: Building Handoffs That Actually Work

Insights from Fin Team

The promise of AI in support is not full automation: it is the right work going to the right place. Some queries benefit from AI resolution: consistent, instant, available at any hour. Others require human judgment: escalations involving legal risk, customers in distress, complex technical issues with ambiguous root causes. The teams getting the most out of AI Procedures are not the ones trying to automate everything. They are the ones who have designed the handoff between AI and human as carefully as they have designed the automation itself.

This guide is for support leaders and operations teams building Procedures-based automation with Intercom. It covers when AI-to-human handoffs should happen, how to structure them so human agents receive everything they need, what to train your team on, and how to measure whether the collaboration is working.

The Case for AI-Human Collaboration

Pure automation fails on the queries that matter most. A prescription refill Procedure can handle the routine request, but it should not attempt to advise a patient on drug interactions. An account troubleshooting Procedure can collect diagnostic information and run standard checks, but it should not be the final word on a complex technical issue that may require a product team escalation.

Pure human support does not scale. Teams using Fin see an average resolution rate of 59%, and every query Fin resolves is one that would otherwise consume agent time — often simple to moderately complex requests that represent significant volume but require no genuine human judgment.

The productive position is in between. AI Procedures handle the repeatable, process-driven tier of complex queries. Human agents handle everything that requires judgment, relationship management, or sensitivity the AI cannot match. The quality of the transition between these two modes determines whether the combined system outperforms either one alone.

How Fin Procedures Enable Collaboration

A Fin Procedure is not just an automation: it is a data collection and reasoning system that runs before a human agent sees the conversation. When a Procedure escalates, the human agent receives a conversation that already contains:

  • The customer's stated issue, as understood and confirmed by Fin
  • Any information collected during the Procedure steps (account details, order numbers, evidence of a problem)
  • The results of system lookups Fin performed (current plan, subscription status, order state)
  • The specific point at which the Procedure determined human involvement was needed and why
  • A structured note summarizing the above

This handoff context transforms what the human agent needs to do. Instead of starting the conversation over, asking the customer to re-explain their issue, and manually looking up their account, the agent reviews the note, has the full picture, and focuses on the step that actually requires human judgment.

The difference in handle time is significant. When Fin collects the diagnostic information for an account troubleshooting query before escalating, the human agent's effective work is reduced to the judgment call at the end of the process — not the entire process.

Building Effective Handoff Protocols

Handoffs work when they are designed explicitly. A Procedure that escalates without a handoff plan transfers volume to the human queue without transferring context. The following steps produce handoffs that work.

Step 1: Define escalation triggers precisely

Before writing the Procedure, identify every scenario where Fin should hand off rather than resolve. Common triggers include:

  • The customer explicitly requests a human agent
  • The customer's situation involves a policy exception that requires manager approval
  • The query type involves legal, compliance, or billing dispute resolution above a threshold
  • Fin has made a specified number of unsuccessful attempts to resolve the issue
  • The conversation sentiment indicates distress or escalation risk
  • The system data retrieved indicates the customer falls into a category requiring white-glove handling (enterprise plan, flagged account, etc.)

Write these triggers into the Procedure explicitly as Condition steps. "If the customer has an Enterprise plan AND is requesting a cancellation, escalate to the account management team."
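The trigger list above can be sketched as ordered condition checks. This is a minimal Python illustration, not Fin's actual Condition-step syntax; every field and threshold here is a hypothetical stand-in.

```python
from dataclasses import dataclass

@dataclass
class ConversationState:
    """Hypothetical snapshot of what the AI knows when a Condition step runs."""
    plan: str                # e.g. "basic", "enterprise"
    requested_action: str    # e.g. "refund", "cancellation"
    failed_attempts: int     # unsuccessful resolution attempts so far
    sentiment: str           # e.g. "neutral", "distressed"
    human_requested: bool    # customer explicitly asked for a person

def should_escalate(state: ConversationState) -> tuple[bool, str]:
    """Evaluate each trigger in order; return whether to escalate and why.

    The reason string later becomes the "why escalation was triggered"
    field of the handoff note.
    """
    if state.human_requested:
        return True, "customer explicitly requested a human agent"
    if state.plan == "enterprise" and state.requested_action == "cancellation":
        return True, "enterprise cancellation requires the account management team"
    if state.failed_attempts >= 2:
        return True, "reached the limit of unsuccessful resolution attempts"
    if state.sentiment == "distressed":
        return True, "conversation sentiment indicates distress"
    return False, ""
```

Returning the reason alongside the decision keeps the trigger and its explanation in one place, so nothing has to be reconstructed at note-writing time.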

Step 2: Design the handoff note

The handoff note is the single most important output of a Procedure escalation. When Fin reaches an escalation end step, it should add a structured note to the conversation that contains:

  • What the customer asked for: a one-sentence summary of the original request
  • What Fin collected: all data gathered during the Procedure, including system lookups
  • What Fin found: the result of any system checks or eligibility evaluations
  • Why escalation was triggered: the specific condition that caused Fin to stop
  • Recommended next step: what the human agent should do based on Fin's findings

Write this note format into the Procedure explicitly. Do not leave it to Fin's general reasoning to determine what to include. A note that says "Customer wanted help with their account" is not useful. A note that says "Customer is requesting a refund for Invoice #12345 ($299). Account verified. Refund policy allows refunds within 30 days of charge. Invoice is 34 days old, which is outside policy. Customer's plan tier is Enterprise. Recommend reviewing with account manager before declining." is useful.
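One way to guarantee the format is followed is to render the five fields through a fixed template rather than leaving them to general reasoning. A sketch, with hypothetical parameter names:

```python
def build_handoff_note(asked: str, collected: str, found: str,
                       trigger: str, next_step: str) -> str:
    """Render the five required handoff fields into one structured note."""
    return "\n".join([
        f"What the customer asked for: {asked}",
        f"What Fin collected: {collected}",
        f"What Fin found: {found}",
        f"Why escalation was triggered: {trigger}",
        f"Recommended next step: {next_step}",
    ])

# Example using the refund scenario from this section
note = build_handoff_note(
    asked="Refund for Invoice #12345 ($299)",
    collected="Account verified; invoice date retrieved",
    found="Invoice is 34 days old, outside the 30-day refund window",
    trigger="Out-of-policy refund request from an Enterprise customer",
    next_step="Review with the account manager before declining",
)
```

Because every field is a required argument, a note missing one of the five sections cannot be produced at all, which is the property the completeness metric below checks for.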

Step 3: Route to the right team

Define which team receives each escalation type. A billing dispute escalation routes to the billing team. A technical issue escalation routes to the technical support tier. An enterprise customer escalation routes to the account management team.

Configure this routing in the Procedure's escalation end step. Fin should not route all escalations to a generic queue and leave human agents to sort them.
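The type-to-team mapping is worth writing down explicitly. A sketch with hypothetical escalation-type and team names:

```python
# Hypothetical escalation types mapped to the owning teams from this section.
ESCALATION_ROUTES = {
    "billing_dispute": "billing_team",
    "technical_issue": "technical_support_tier",
    "enterprise_customer": "account_management_team",
}

def route_escalation(escalation_type: str) -> str:
    """Return the owning team; unknown types go to a triage queue,
    never to a generic bucket that agents must sort by hand."""
    return ESCALATION_ROUTES.get(escalation_type, "triage_queue")
```

A named triage fallback makes unrouted escalation types visible as a queue to drain, rather than letting them disappear into a catch-all.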

Step 4: Set expectations with the customer before handing off

When Fin escalates, it should tell the customer what is happening and what to expect. "I have collected all the details I need. I am passing you to our account team now. They will have everything we just discussed and will follow up within 2 hours." This is preferable to a silent transfer that leaves the customer wondering if the conversation is being abandoned.

Write this communication step into the Procedure before the escalation end step.

What Your Human Agents Need to Know

Agents who receive Procedure handoffs work differently from agents who handle conversations from the start. Teams that do not prepare agents for this difference forfeit much of the value Fin creates.

Agents need to understand what Fin has already done. If Fin ran a subscription lookup and found the customer is on the basic plan, the agent does not need to run the same lookup. They pick up from where Fin left off. This requires agents to read the Fin note before responding — a habit that should be part of handoff training.

Agents need to know when Fin is still in the conversation. After a handoff, Fin exits the Procedure but may still be available to the customer for general questions. Agents should understand when they are the sole point of contact versus when Fin may continue handling informational queries while the agent handles the resolution step.

Agents need feedback loops. When agents handle a Procedure escalation and notice that Fin collected incorrect information, asked the wrong questions, or escalated when it should have resolved, that feedback needs a path back to the team managing Procedures. Establish a standard way for agents to flag bad handoffs — a tag, a Slack channel, or a weekly review — so Procedure quality improves over time.

Agents need calibration on their role. The objective is not for agents to compete with or second-guess Fin. It is to handle the specific slice of work that requires human judgment. Agents who understand this framing work more efficiently and experience less friction with the AI agent.

Measuring Collaboration Quality

Automation rate measures volume. Collaboration quality measures whether the combined AI-human system is producing better outcomes than either would produce alone.

  • Escalation handoff completeness: the percentage of escalations that include all required note fields. Below 90% indicates Procedure design issues in the escalation step.
  • Post-escalation handle time: how long human agents spend on Procedure-escalated conversations versus direct conversations. This should be lower for Procedure-escalated conversations; if it is not, the note is not usable.
  • Escalation resolution rate: the percentage of escalated conversations that result in resolution at the human tier. Low rates indicate the escalation triggers are miscalibrated and Fin is escalating cases it should resolve.
  • Appropriate escalation rate: the percentage of escalated conversations human agents judge as "correctly escalated". Review a sample weekly; target 85% or higher.
  • Customer effort score (escalated): how much effort customers report experiencing when Fin hands off to a human. Higher than baseline suggests handoff friction, meaning the customer is repeating themselves.
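Two of these metrics can be computed directly from a sample of escalated conversations. A sketch assuming each note is represented as a dict keyed by the five handoff fields (hypothetical keys):

```python
REQUIRED_FIELDS = ("asked", "collected", "found", "trigger", "next_step")

def handoff_completeness(notes: list[dict]) -> float:
    """Fraction of escalation notes with every required field present and non-empty."""
    complete = sum(all(n.get(f) for f in REQUIRED_FIELDS) for n in notes)
    return complete / len(notes)

def appropriate_escalation_rate(reviews: list[bool]) -> float:
    """Fraction of sampled escalations agents judged 'correctly escalated'."""
    return sum(reviews) / len(reviews)
```

Run both over a weekly sample; completeness below 0.9 points at the escalation step's note design, while an appropriateness rate below 0.85 points at the trigger conditions.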

Common Failure Modes

Escalation notes are too vague to be useful. When Fin escalates with a generic note, agents must restart the conversation. Audit a sample of escalation notes monthly. If agents are re-asking questions that Fin should have answered, the note format needs to be more specific.

Escalation triggers are too aggressive. When the escalation rate is higher than the team planned, the most common cause is trigger conditions that fire too readily. Review the conditions that fire escalations and test whether Fin could resolve those cases with better instructions or additional system access.

Agents are not reading the handoff note. If handle time for Procedure-escalated conversations is the same as for direct conversations, agents may not be using the context Fin collected. Address this through team calibration — show agents a side-by-side comparison of conversations where the note was used versus ignored and the resulting handle time difference.

No feedback loop from agents to Procedure owners. Agents see Procedure failures in real time. Without a structured path for that feedback to reach the team managing Procedures, the same errors repeat. Build the feedback loop before going live.

Key Takeaways

  • The best support operations use AI Procedures and human agents in defined, complementary roles, not as competitors
  • Handoff quality determines whether Procedure automation creates value or creates a worse experience than handling the query manually from the start
  • Effective handoff protocols require explicit trigger conditions, structured note formats, team-specific routing, and customer communication before transfer
  • Human agents working with Procedure handoffs need training on how to use the context Fin provides, how to flag bad handoffs, and how their role differs from handling conversations from scratch
  • Measure collaboration quality separately from automation rate — resolution volume and handoff quality are both indicators of a healthy AI-human support operation