Note: Monitors are available as part of the Pro add-on.
Monitors define which conversations get reviewed. You set the criteria, choose the reviewer, and attach a scorecard to evaluate quality. Once live, Monitors run automatically and surface matching conversations for your team to action. Monitors currently evaluate Fin AI Agent conversations only.
You need at least one scorecard to get the most out of a Monitor. See creating and configuring scorecards if you have not set one up yet.
To access Monitors, go to Fin AI Agent > Analyze > Monitors. Click + Monitor to get started. Pick one of the templates or start from scratch.
Tip: You can attach a scorecard to a Monitor during setup, so having at least one scorecard ready before creating your first Monitor will save you time.
Required permissions
To view and configure QA Monitors and scorecards, teammates need two permissions enabled by a workspace admin.
| Permission | What it allows | Who needs it |
| --- | --- | --- |
| Can view Fin and Automation settings | Day-to-day QA work: reviewing conversations, adjusting criteria values | All QA teammates |
| Can create, edit and internally share Reports | Configuration access: creating, editing, and deleting Monitors, Scorecards, and evaluators | QA admins / leads |
Both permissions are needed for full QA configuration access. The first provides read access to the QA settings area, and the second provides write/edit capabilities for Monitors and Scorecards.
How to enable permissions
A workspace admin needs to enable these for each teammate who will work with Monitors.
Go to Settings > Teammates & Roles.
Select the teammate you want to configure.
Enable Can view Fin and Automation settings and Can create, edit and internally share Reports.
Save changes.
Note: If you're unsure which teammates need access, start with QA leads or support managers who will own Monitor and scorecard configuration.
Step 1: Choose conversations
Give your Monitor a name, then choose which conversations it should review.
Your Monitor can target:
A random sample — for example, a weekly sample of Fin conversations for baseline QA
A targeted set based on specific signals or risk — for example, all conversations where a customer shows signs of financial vulnerability
You can narrow down conversations using:
Precise filters — Resolution State, Topic, CX Score, and more
Flag criteria — natural language input that describes the types of conversations you want flagged. For help writing effective criteria, see how to write effective Monitor and Scorecard Criteria.
Note: A single conversation can appear in multiple Monitors. Each Monitor runs independently, so if a conversation matches more than one Monitor's criteria, it will be flagged in each. Clicking through to a conversation shows exactly why it was flagged by that Monitor.
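For illustration, here is a minimal sketch (in TypeScript, with invented names, not an Intercom API) of how each Monitor acts as an independent yes/no check:

```typescript
// Each Monitor is an independent yes/no check; a conversation can be
// flagged by any number of them. All names here are hypothetical.
interface Monitor {
  name: string;
  matches: (conversation: string) => boolean; // filters + flag criteria
}

const monitors: Monitor[] = [
  { name: "Vulnerable customers", matches: (c) => c.includes("hardship") },
  { name: "Refund requests", matches: (c) => c.includes("refund") },
];

// A single conversation can match more than one Monitor.
function flaggedBy(conversation: string): string[] {
  return monitors.filter((m) => m.matches(conversation)).map((m) => m.name);
}
```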
Step 2: Choose a Monitoring mode
Select how the Monitor runs:
Continuous: runs on an ongoing basis, matching new conversations as they close and adding them automatically
One-time: backfill only, matching conversations from historical data. New conversations that close after setup are not included
Scheduled: runs on a recurring daily or weekly cadence, letting teammates review conversations on a regular schedule
Step 3: Select the start date
Choose when the Monitor should begin evaluating conversations. This lets you run QA on historical conversations from a specific point in time, while continuously surfacing new matching conversations from that date forward.
Note: When first creating a Monitor, you can backfill up to 90 days of historical conversations. From that point, the Monitor continues capturing new matching conversations automatically.
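As a rough illustration of the 90-day limit (hypothetical code, not Intercom's implementation), the effective start date is the later of your chosen date and 90 days before the Monitor was created:

```typescript
const MAX_BACKFILL_DAYS = 90;
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// The Monitor can look back at most 90 days from its creation date,
// so the effective start is the later of the two dates.
function effectiveStartDate(chosenStart: Date, monitorCreatedAt: Date): Date {
  const earliestAllowed = new Date(
    monitorCreatedAt.getTime() - MAX_BACKFILL_DAYS * MS_PER_DAY
  );
  return chosenStart > earliestAllowed ? chosenStart : earliestAllowed;
}
```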
Step 4: Choose when conversations are added
Control when a conversation is matched to the Monitor. This determines when the Monitor evaluates the conversation — and, if a scorecard is attached, when that scorecard runs.
Fin is done — conversations are added once Fin has fully completed handling them (resolved, escalated, or followed up with no customer reply)
Conversation is closed — conversations are added only after the conversation is closed, either by a teammate or by Fin
Use this setting to align evaluation timing with your workflow — whether you want to assess Fin immediately after it finishes, or only once the conversation is officially closed.
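Conceptually, the two options behave like this (an illustrative sketch with invented types):

```typescript
type AddTrigger = "fin_is_done" | "conversation_closed";

interface ConversationState {
  finDone: boolean; // Fin resolved, escalated, or followed up with no reply
  closed: boolean;  // closed by a teammate or by Fin
}

// A conversation is added to the Monitor (and any attached scorecard runs)
// when the chosen trigger condition is met.
function shouldAddToMonitor(state: ConversationState, trigger: AddTrigger): boolean {
  return trigger === "fin_is_done" ? state.finDone : state.closed;
}
```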
Step 5: Choose the reviewer
All conversations that match the Monitor are automatically assigned to the selected reviewer, so reviews are routed consistently without manual coordination.
Note: If the attached scorecard has Auto-review enabled, the reviewer status will show as Auto-reviewed. These conversations will bypass the manual Unreviewed queue unless the AI detects a failure or cannot confidently score criteria.
In this example, the Monitor flags conversations with vulnerable customers and starts finding matches from the set start date.
These conversations can be assigned to a reviewer to complete any manually scored criteria.
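Conceptually, routing works roughly like this (hypothetical names, not the actual implementation):

```typescript
// Matched conversations go to the selected reviewer, unless Auto-review
// confidently scores them with no failures detected.
function routeReview(opts: {
  autoReviewEnabled: boolean;
  aiScoredConfidently: boolean;
  aiDetectedFailure: boolean;
  reviewer: string;
}): string {
  const { autoReviewEnabled, aiScoredConfidently, aiDetectedFailure, reviewer } = opts;
  if (autoReviewEnabled && aiScoredConfidently && !aiDetectedFailure) {
    return "Auto-reviewed"; // bypasses the manual Unreviewed queue
  }
  return `Unreviewed, assigned to ${reviewer}`;
}
```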
Step 6: Attach a scorecard
Associate a scorecard with the Monitor to automatically evaluate every matched conversation against defined criteria. Once selected, the scorecard runs as soon as a conversation is added to the Monitor, and results appear in the Monitor for reporting and review.
If you attach a scorecard after conversations have already been added to a Monitor, those earlier conversations will not be retroactively evaluated and will show no score. Only conversations added to the Monitor after the scorecard is attached will be scored.
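In other words, the timing rule works roughly like this (a hypothetical sketch):

```typescript
interface MatchedConversation {
  addedToMonitorAt: Date;
}

// Only conversations added after the scorecard was attached get scored;
// earlier matches show no score.
function willBeScored(convo: MatchedConversation, scorecardAttachedAt: Date): boolean {
  return convo.addedToMonitorAt >= scorecardAttachedAt;
}
```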
Tip: Attaching a scorecard is what makes a Monitor truly useful — without one, conversations are flagged but not scored. If you have not created a scorecard yet, see creating and configuring scorecards.
Test your Monitor before turning it on
For Monitors that use natural language flag criteria, use the Test Monitor tool to validate your criteria against real conversations before creating or updating the Monitor. It shows which conversations would be flagged and highlights mismatches so you can refine the wording and reduce false positives or misses.
Tip: We strongly recommend testing every Monitor with flag criteria before turning it on.
In the Flag criteria section, click Run test, or click the Test button in the top right.
Review sample conversations
For existing Monitors, this list is automatically populated with recent conversations that were flagged and not flagged by the Monitor. You can also paste additional conversation URLs or IDs to test specific edge cases.
Check the results
For each conversation, review the Monitor result (Flagged / Not flagged) and mark whether it is Correct. The evaluation summary shows your overall pass rate and highlights mismatches.
Note: A conversation must include at least 2 messages from Fin and 2 messages from the customer to be included in a Monitor.
Refine and retest
Update the Flag criteria description and rerun the test until the results accurately reflect what you want the Monitor to capture. Use the Refine wording button to let the AI automatically rephrase your flag criteria — this can help tighten the language and improve accuracy without rewriting criteria manually.
Once the Monitor has been created, it will start finding matches and appear on your Monitors page. You can always edit the configuration later if needed.
Managing reviews at scale
Instead of checking each Monitor individually, manage your team's workload through two centralized views on the Monitors page:
Unreviewed conversations — a unified queue for all conversations requiring manual review, including conversations where AI could not complete the scorecard or where a human reviewer is specifically assigned
Fixes needed — automatically captures any conversation that has been reviewed (by AI or a human) and marked with a failing status, for example Reviewed + fix needed
How to complete a review
Go to the Unreviewed queue on the Monitors page.
Select a conversation.
Fill in any missing scorecard criteria. AI-generated scores can be overridden by clicking the rating. Note that populating all scores does not automatically update the review status; you'll need to set it manually once scoring is complete.
Add a note if needed - for example, "this needs a content update".
Once all criteria are scored, set the review status manually to reflect the outcome - for example, Reviewed if no action is needed, or Reviewed + fix needed if a fix is required. Conversations marked as Reviewed + fix needed will automatically move to the Fixes needed queue for your team to address.
Reviewers can filter the queue by their own name to focus on conversations assigned to them.
QA review status labels
Review status labels use a consistent Reviewed prefix to clearly distinguish the review outcome from the action needed.
| Label | What it means |
| --- | --- |
| Unreviewed | No review has taken place yet |
| Reviewed | Review complete, no action needed |
| Reviewed + fix needed | Review complete, a fix is required |
| Reviewed + won't fix | Review complete, issue acknowledged but won't be actioned |
| Reviewed + fixed | Review complete, fix has been applied |
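If it helps to picture the labels and queues as data, here is an illustrative model (not Intercom's schema):

```typescript
type ReviewStatus =
  | "Unreviewed"
  | "Reviewed"
  | "Reviewed + fix needed"
  | "Reviewed + won't fix"
  | "Reviewed + fixed";

// The two centralized views on the Monitors page, expressed as predicates.
const needsManualReview = (s: ReviewStatus) => s === "Unreviewed";
const needsFix = (s: ReviewStatus) => s === "Reviewed + fix needed";
```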
Monitor constraints
Be aware of the following constraints:
A conversation must include at least 2 messages from Fin and 2 messages from the customer to be included in a Monitor.
Monitors do not support Fin Voice conversations.
Customer tickets and tracker tickets are not matched into Monitors — only conversations.
When first creating a Monitor, you can backfill up to 90 days of historical conversations.
You can create up to 20 Monitors using natural language flag criteria. Monitors using predefined filters are unlimited.
Scorecards support up to 20 AI-scored criteria. Manually scored criteria are unlimited.
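Taken together, the matching constraints can be sketched like this (hypothetical types, for illustration only):

```typescript
interface Candidate {
  isVoice: boolean;            // Fin Voice conversations are excluded
  isTicket: boolean;           // customer and tracker tickets are excluded
  finMessageCount: number;
  customerMessageCount: number;
}

function isEligibleForMonitors(c: Candidate): boolean {
  return (
    !c.isVoice &&
    !c.isTicket &&
    c.finMessageCount >= 2 &&      // at least 2 messages from Fin
    c.customerMessageCount >= 2    // at least 2 messages from the customer
  );
}
```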
FAQs
Can the same conversation appear in multiple Monitors?
Yes. Each Monitor runs independently as a yes/no check, so if a conversation matches more than one Monitor's criteria, it will be flagged in each. Clicking through to a conversation shows exactly why it was flagged by that specific Monitor.
If a conversation is evaluated and then reopens, will it be evaluated again?
No. A conversation is evaluated only once per Monitor. If the conversation later reopens because the customer sends a new message, it will not be re-matched or re-evaluated under the same Monitor version. The original evaluation is the only one recorded.
Do Monitors work for Fin Voice?
No, Fin Voice is not currently supported by Monitors.
Are tickets evaluated by Monitors?
No, neither customer tickets nor tracker tickets are matched into Monitors - only conversations.
Who can assign or change permissions?
Only workspace admins can enable or change teammate permissions. If you don't have admin access, ask your workspace admin to update your settings.
Do I need both permissions to review conversations?
For day-to-day reviewing (reading conversations, filling in scorecards), you only need Can view Fin and Automation settings. The second permission (Can create, edit and internally share Reports) is required if you need to create, edit, or delete Monitors and scorecards.
What is the minimum conversation length for a Monitor to evaluate it?
A conversation must include at least 2 messages from Fin and 2 messages from the customer to be included in a Monitor.