
Monitors explained

Understand how Monitors help you evaluate Fin's conversation quality at scale, and how they work with Custom Scorecards.

Written by Alissa Tyrangiel

What are Monitors?

Monitors help you continuously evaluate and improve Fin's conversation quality at scale. They give you a structured way to define which conversations should be reviewed, whether that's a random sample for baseline quality, or a targeted set based on higher-risk or higher-impact signals. This replaces ad-hoc sampling and spreadsheet-driven QA with a repeatable system that scales as volume grows.


How teams use Monitors

Teams use Monitors to maintain ongoing visibility into quality and focus attention where it matters most.

Common use cases include:

  • Reviewing a random sample to understand overall quality trends.

  • Focusing on higher-risk or higher-impact conversations, such as:

    • Low CX scores

    • Policy breaches

    • Legal threats

    • Other business-specific indicators

  • Tracking conversations tied to a specific initiative, like a feature launch, pricing change, or product update.

Monitors make it easier to detect patterns, surface issues earlier, and generate insights that can be shared with product, support, or leadership teams.


How Monitors work with Custom Scorecards

  • Monitors define what gets reviewed

  • Scorecards define how each conversation is evaluated

Scorecards can include criteria that are:

  • Reviewed manually

  • Evaluated using AI

  • Or a combination of both

You can associate a scorecard with a Monitor to automatically evaluate every matched conversation against defined criteria. Once selected, the scorecard runs as soon as the conversation is added to the Monitor, and results appear in the Monitor for reporting and review.

This ensures quality is assessed consistently, while still allowing flexibility in how reviews are performed.

Tip: Monitors can use Auto-review to skip manual checks entirely when AI scoring meets your quality standards, meaning your team only needs to step in for failures or edge cases.


Manage reviews at scale

Instead of checking each Monitor individually, you can manage your team's workload through two centralized views on the Monitors page:

  • Unreviewed conversations: A unified queue for all conversations requiring manual review. This includes conversations where AI could not complete the scorecard or where a human reviewer is specifically assigned.

  • Fixes needed: Automatically captures any conversation that has been reviewed (by AI or a human) and marked with a failing status, for example, Reviewed + fix needed.


Coming soon

We’re expanding Monitors with more powerful ways to detect issues, measure quality, and take action. Upcoming improvements include:

  • Real-time alerts: Get notified when conversations in a Monitor cross defined thresholds or fail a scorecard.

  • Human agent QA: Apply Monitors and scorecards to teammate conversations, not just Fin.

  • Evaluation against your knowledge base: Score conversations against your support content and policies, helping ensure responses align with approved sources.


Get started

Ready to get started? Head over to How to create a Monitor for a step-by-step guide to creating your first Monitor and scorecard.

Note: Monitors requires the Pro add-on. Make sure your workspace has it before setting up your first Monitor.

