What is the difference between a Monitor and a Scorecard?
They do two different things and work together:
Monitors define which conversations get reviewed — you set the criteria (filters or natural language flag criteria), and the monitor automatically surfaces matching conversations.
Scorecards define how each conversation is evaluated — you set the criteria, rating options, and scoring rules that reviewers (or AI) use to assess quality.
A monitor without a scorecard will flag conversations but won't score them. For a complete QA workflow, you need both.
Are there limits on how many monitors or scorecard criteria I can create?
Yes, the following limits apply:
- Up to 20 monitors using natural language flag criteria. Monitors using predefined filters are unlimited.
- Up to 20 AI-scored scorecard criteria. Manually scored criteria are unlimited.
Can I have multiple monitors running at the same time?
Yes, you can create as many monitors as you need, each targeting a different type of conversation or quality signal. For example, you might run one monitor for a random baseline sample, another for low CX score conversations, and another for conversations flagged as policy risks. Each monitor runs independently. Note that there is a limit of 20 monitors using natural language flag criteria — monitors using predefined filters are unlimited.
Can I edit a monitor after it has been created?
Yes, you can edit your monitor configuration at any time from the Monitors page. Changes apply going forward — previously matched conversations are not re-evaluated under the updated criteria.
Can I use a monitor for teammate conversations, not just Fin?
Not yet. Monitors currently evaluate Fin AI Agent conversations only. Applying Monitors and Scorecards to human teammate conversations is on the roadmap.
Is there a way to pause a Monitor?
No. If a Monitor is set to continuous mode, the only way to stop it is to delete it. To avoid this, set the monitoring mode to a one-time check or a scheduled Monitor with an end date.
Is there a way to duplicate a Monitor?
Yes, click the three-dot menu on any Monitor from the Monitors page and select Duplicate monitor. The same menu also gives you options to Edit or Delete the Monitor.
Can I save a Monitor as a draft?
No, not at the moment.
Do Monitors have access to the full workspace setup?
No, Monitors only have access to the conversation transcript. They don't have access to the Knowledge base, Guidance, Tasks, or Procedures. For example, a Monitor cannot check whether Fin had access to a specific article, or whether a conversation should have been passed to a human based on a guidance rule.
Is it possible to see when a Monitor was edited, what changed, and by whom?
No, audit history for Monitor edits is not currently available in the UI.
Is it possible to use custom date ranges in the Activity view on the Monitor dashboard?
No, the only available date filters are last 7 days, last 8 weeks, and last 6 months.
Is there an API endpoint for Monitor reporting data?
No, not at the moment.
Conversations and eligibility
What is the minimum conversation length for a monitor to include it?
A conversation must include at least 2 messages from Fin and 2 messages from the customer to be matched into a monitor.
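The threshold above can be sketched as a simple check. This is just an illustration of the documented rule — the function name and message shape are hypothetical, not Intercom's implementation or API.

```python
# Illustrative sketch of the eligibility rule: a conversation needs at least
# 2 messages from Fin and 2 from the customer to be matched into a monitor.

def is_monitor_eligible(messages):
    """Return True if the conversation meets the minimum-length rule."""
    fin = sum(1 for m in messages if m["sender"] == "fin")
    customer = sum(1 for m in messages if m["sender"] == "customer")
    return fin >= 2 and customer >= 2

convo = [
    {"sender": "customer", "text": "Hi, my invoice looks wrong"},
    {"sender": "fin", "text": "Happy to help. Which invoice?"},
    {"sender": "customer", "text": "The one from March"},
    {"sender": "fin", "text": "Thanks, checking now."},
]
print(is_monitor_eligible(convo))      # True: 2 Fin + 2 customer messages
print(is_monitor_eligible(convo[:2]))  # False: only 1 of each
```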
Can the same conversation appear in multiple monitors?
Yes. Each monitor runs as an independent yes/no check, so if a conversation matches the criteria of more than one monitor, it will be flagged in each. Clicking through to a conversation from any monitor shows exactly why it was flagged by that specific monitor.
If a conversation is evaluated and then reopens, will it be re-evaluated?
No. A conversation is evaluated only once per monitor. If the conversation later reopens because the customer sends a new message, it will not be re-matched or re-evaluated under the same monitor version. The original evaluation is the only one recorded.
Can I backfill historical conversations when creating a monitor?
Yes. When first creating a monitor, you can backfill up to 90 days of historical conversations. From that point, the monitor continues capturing new matching conversations automatically.
Do monitors work for Fin Voice?
No. Fin Voice is not currently supported by Monitors.
Are tickets evaluated by monitors?
No. Neither customer tickets nor tracker tickets are matched into monitors — only conversations.
What happens if I update a Monitor's flag criteria — do existing conversations get removed?
No. Conversations that matched the previous flag criteria stay in the Monitor. Only conversations going forward are matched against the updated configuration.
How long does it take for conversations to appear in a Monitor?
Conversations should appear instantly. However, the conversation count on the Monitor dashboard can take a few minutes to update when there is a high volume of conversations.
If a conversation is incorrectly showing in a Monitor, can I remove it?
No. It's not currently possible to manually remove a conversation from a Monitor.
Can I still run a test on a Monitor if I haven't set up flag criteria?
No. The Run test option only appears when flag criteria have been configured.
How can I check why a conversation did or didn't match my flag criteria?
Go to the Monitor, click Edit, and open the Run test section. Under Test flag criteria on sample conversations, enter the conversation ID and click Add. The Monitor result will explain why the conversation matched or didn't match.
Scorecards and scoring
Do I need a scorecard to use a monitor?
No, but without one your monitor will flag conversations without scoring them. Reviewers will see matched conversations with no criteria to evaluate. For a complete QA workflow, attach a scorecard during monitor setup.
Can I attach different scorecards to different monitors?
Yes. Each monitor can have its own scorecard, so you can evaluate different types of conversations against different quality criteria. For example, a monitor for vulnerable customers might use a different scorecard than a monitor for billing conversations.
Can I reuse scorecard criteria across multiple scorecards?
Yes — criteria titles and descriptions are reusable. Once you have created criteria, you can add them to multiple scorecards. However, previous rating scores cannot be reused and need to be set from scratch in each scorecard.
Can I mix AI-scored and manually scored criteria in the same scorecard?
Yes. You can choose on a per-criteria basis whether AI or a human reviewer handles scoring. If Auto-review is enabled and any criteria requires manual scoring, those conversations will still appear in the Unreviewed queue.
What does marking criteria as critical do?
If a critical criteria receives a failing rating, the overall review score drops to 0%, regardless of how the other criteria scored. This overrides all weights and the pass threshold. A Not scored rating does not trigger the failure. Critical criteria are useful for non-negotiable standards like compliance requirements, safety or policy adherence, and escalation handling.
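The critical-criteria override can be sketched as follows. This is an illustration of the rule, not Intercom's implementation; the data shape, function name, and the use of 0.0 for a failing rating are assumptions.

```python
# Sketch: a failing rating on any critical criteria forces the overall review
# score to 0%, overriding all weights. A "Not scored" rating (None here) does
# not trigger the override.

def review_score(criteria):
    # criteria: list of dicts with "weight", "score" (0.0-1.0, or None for
    # "Not scored"), and a "critical" flag
    for c in criteria:
        if c["critical"] and c["score"] == 0.0:
            return 0.0  # critical failure overrides everything
    scored = [c for c in criteria if c["score"] is not None]
    total = sum(c["weight"] for c in scored)
    return sum(c["weight"] * c["score"] for c in scored) / total

criteria = [
    {"weight": 90, "score": 1.0, "critical": False},  # heavily weighted pass
    {"weight": 10, "score": 0.0, "critical": True},   # critical failure
]
print(review_score(criteria))  # 0.0, despite the heavily weighted pass
```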
How do scorecard weights work?
Weights are relative to each other, not fixed to a scale of 100. The total can add up to any number — what matters is the proportion each criteria contributes. For example, criteria with a weight of 25 out of a total of 50 contributes the same as one weighted at 50 out of 100. Weights only apply to criteria that are included in the review score.
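The proportionality of weights can be shown with a short calculation. This is illustrative arithmetic only, not Intercom's implementation.

```python
# Sketch of relative weighting: only the proportion of each weight matters,
# so the same ratings produce the same overall score whether the weights
# total 50 or 100.

def weighted_score(ratings):
    # ratings: list of (weight, score) pairs, score in 0.0-1.0
    total = sum(w for w, _ in ratings)
    return sum(w * s for w, s in ratings) / total

a = weighted_score([(25, 1.0), (25, 0.5)])  # weights total 50
b = weighted_score([(50, 1.0), (50, 0.5)])  # weights total 100
print(a, b)  # 0.75 0.75 - identical, because the proportions are identical
```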
Can AI-generated scores be overridden?
Yes. Teammates can manually override any AI score if they spot a discrepancy. Simply click the rating in the conversation's Score tab to change it.
When reusing Scorecard criteria, can I also reuse its weightings?
No. Only the name and description carry over. Weightings always need to be configured from scratch.
How does selecting N/A for Scorecard criteria affect the overall score?
The criteria is excluded from weighting entirely. The overall score is calculated based on the remaining scored criteria only.
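The N/A exclusion can be sketched the same way. Again, this is an illustration of the documented rule with hypothetical names, not Intercom's implementation.

```python
# Sketch of the N/A rule: criteria rated N/A (None here) are excluded from
# weighting, so the overall score is computed from the remaining scored
# criteria only.

def overall_score(ratings):
    # ratings: list of (weight, score) pairs; score is None for N/A
    scored = [(w, s) for w, s in ratings if s is not None]
    total = sum(w for w, _ in scored)
    return sum(w * s for w, s in scored) / total

result = overall_score([(40, 1.0), (40, 0.5), (20, None)])  # third is N/A
print(result)  # 0.75 - as if only the two scored criteria existed
```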
Where can I see review scores?
Scores are visible in the Monitor conversation list (showing overall score and individual criteria ratings as columns) and within each individual conversation under the Score tab.
Can I save a Scorecard as a draft?
No, not at the moment.
Is it possible to see whether a Scorecard was edited, and by whom?
No. Audit history for Scorecard edits is not currently available.
Auto-review
What is Auto-review?
Auto-review is a feature you can enable on a scorecard that automates the entire QA process. When turned on, if AI scores all criteria in a scorecard, the manual review step is skipped entirely. Conversations are marked as reviewed automatically, and only failures or edge cases require human attention.
When does Auto-review skip the manual review step?
Auto-review skips manual review only when AI can confidently score all attributes in the scorecard. If any attribute requires a human reviewer, or if the AI cannot confidently score an attribute, those conversations will still appear in the Unreviewed queue.
What happens when Auto-review detects a failing score?
If the AI gives a failing score, the conversation is automatically marked as Reviewed + fix needed and routed to the Follow-up actions needed queue — no manual review step required. Teammates can still override the score if needed.
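The Auto-review routing described above can be summarised as a decision rule. The queue and status names mirror this article; the function, score shape, and pass threshold are illustrative assumptions, not Intercom's API.

```python
# Sketch of Auto-review routing: if AI confidently scores every criteria, the
# conversation skips manual review; a failing overall score is marked
# "Reviewed + fix needed" and routed to follow-up; anything the AI can't
# confidently score (or that needs a human) lands in the Unreviewed queue.

def route(ai_scores, pass_threshold=0.7):
    # ai_scores: dict of criteria name -> score (0.0-1.0), or None when a
    # human reviewer is required or the AI isn't confident
    if any(score is None for score in ai_scores.values()):
        return "Unreviewed queue"
    overall = sum(ai_scores.values()) / len(ai_scores)  # unweighted, for brevity
    if overall < pass_threshold:
        return "Reviewed + fix needed -> Follow-up actions needed queue"
    return "Auto-reviewed"

print(route({"Tone": 0.9, "Accuracy": 1.0}))   # Auto-reviewed
print(route({"Tone": 0.9, "Accuracy": None}))  # Unreviewed queue
print(route({"Tone": 0.2, "Accuracy": 0.3}))   # fix needed -> follow-up queue
```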
What reviewer status shows for Auto-reviewed conversations?
The reviewer status will show as Auto-reviewed. These conversations bypass the manual Unreviewed queue unless the AI detects a failure or cannot confidently score an attribute.
If a human changes an AI score, does the AI learn from it for future reviews?
No. AI doesn't learn from human overrides at the moment.
Can I test a Monitor before turning it on?
Yes — and we strongly recommend it. For Monitors using natural language flag criteria, use the Test monitor tool to validate your criteria against real conversations before going live. It shows which conversations would be flagged and highlights mismatches so you can refine and reduce false positives.
Reviews and workflow
What do the QA review status labels mean?
Each status reflects where a conversation sits in the review workflow:
| Label | What it means |
| --- | --- |
| Unreviewed | No review has taken place yet |
| Reviewed | Review complete, no action needed |
| Reviewed + fix needed | Review complete, a fix is required |
| Reviewed + won't fix | Review complete, issue acknowledged but won't be actioned |
| Reviewed + fixed | Review complete, fix has been applied |
How do I find conversations assigned to me?
Go to the Unreviewed conversations queue on the Monitors page. From there, filter by your own name to see only conversations assigned to you across all monitors.
What happens when I mark a conversation as Reviewed + fix needed?
The conversation is automatically moved from the Unreviewed conversations queue to the Follow-up actions needed queue, where your content or QA team can action it. Once the fix has been applied, you can update the status to Reviewed + fixed.
Can I tag a teammate in a note under the Score tab?
No. Tagging teammates in score notes is not currently supported.
Permissions and access
What permissions do I need to use Monitors?
Two permissions are required, both enabled by a workspace admin:
- Can view Fin and Automation settings — for day-to-day work such as reviewing conversations and adjusting criteria values
- Can create, edit and internally share Reports — for configuration access such as creating, editing, or deleting monitors and scorecards
Who can create and edit monitors?
Any teammate with both the Can view Fin and Automation settings and Can create, edit and internally share Reports permissions can create, edit, and delete monitors and scorecards. Teammates with only the first permission can review conversations and adjust criteria values, but cannot create or delete monitors.