Users often refer to these reports as Auto QA reports or Auto Sampling reports.
In Enthu, both terms point to the same reporting area that shows AI-based evaluation results for sampled calls.

This article explains what these reports show and how to read them.


What these reports are

Auto QA / Auto Sampling Reports show:

  • Calls evaluated by AI using scorecards

  • Quality performance across teams and agents

  • Section-level and question-level AI QA scores

They help you understand how AI evaluations are performing, not just how many calls were sampled.


Where to find Auto QA / Auto Sampling Reports

Go to:
Menu Bar → Auto Sampling Reports

You’ll see three tabs:

  1. AI Evaluation Dashboard

  2. AI Evaluation Report

  3. Section Level Report

All three together make up what users commonly call Auto QA reports.


1. AI Evaluation Dashboard (High-level Auto QA view)

This is the summary view users usually mean when they say “Auto QA report”.

What it shows

  • Total AI Evaluated Calls

  • Average QA Score

  • Average Evaluated Calls per Agent

Quality distribution

Calls are grouped into score ranges:

  • < 70%

  • 70–90%

  • > 90%

This helps you quickly gauge overall AI QA performance at a glance.

Note: These score ranges are customisable under Profile Settings → Evaluation Settings.
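The bucketing logic behind the quality distribution can be illustrated with a short sketch. This is a hypothetical illustration, not Enthu's actual implementation; the 70/90 thresholds shown are the defaults described above and are configurable in Evaluation Settings:

```python
from collections import Counter

def quality_bucket(score, low=70, high=90):
    """Assign an AI QA score (0-100) to a quality band.

    The low/high thresholds mirror the default ranges shown above;
    in Enthu they are configurable, so treat these as examples.
    """
    if score < low:
        return "< 70%"
    if score <= high:
        return "70-90%"
    return "> 90%"

# Hypothetical call scores, grouped into the three bands
scores = [65, 72, 88, 93]
distribution = Counter(quality_bucket(s) for s in scores)
print(dict(distribution))
```

A score exactly on a boundary (e.g. 70 or 90) falls into the middle band in this sketch; the product may draw its boundaries differently.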


2. AI Evaluation Report (Week-on-week Auto QA report)

This view shows Auto QA performance over time.

How it’s structured

  • Scorecard → Team → Agent

  • Week-on-week average AI QA scores

When users call this an “Auto Sampling report”

They usually want to:

  • Compare weeks

  • Check consistency

  • See which agents or teams improved or dropped
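The week-on-week comparison boils down to averaging scores per agent per week and looking at the change between weeks. A minimal sketch of that calculation, using invented agent names and scores (Enthu computes this internally):

```python
from collections import defaultdict

# Hypothetical AI evaluation records (agent, ISO week, QA score)
evaluations = [
    {"agent": "Priya", "week": "2024-W01", "score": 78},
    {"agent": "Priya", "week": "2024-W02", "score": 84},
    {"agent": "Ravi",  "week": "2024-W01", "score": 91},
    {"agent": "Ravi",  "week": "2024-W02", "score": 88},
]

def weekly_averages(records):
    """Average AI QA score per (agent, week) pair."""
    grouped = defaultdict(list)
    for r in records:
        grouped[(r["agent"], r["week"])].append(r["score"])
    return {key: sum(v) / len(v) for key, v in grouped.items()}

avg = weekly_averages(evaluations)
# Week-on-week change: positive = improved, negative = dropped
change = avg[("Priya", "2024-W02")] - avg[("Priya", "2024-W01")]
print(change)  # 6.0
```

The same grouping extends a level up (Scorecard → Team → Agent) by adding those fields to the grouping key.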


3. Section Level Report (Detailed Auto QA breakdown)

This is the deep-dive Auto QA report.

What it shows

  • Individual Auto QAed calls

  • Overall AI QA score per call

  • Section-wise and question-wise scores

  • Clickable call links

Users usually come here when they ask:

“Why is the Auto QA score low?”
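A low overall score usually traces back to one weak section. The roll-up from question-level scores to section-level scores can be sketched as follows, with entirely hypothetical section and question names:

```python
# Hypothetical per-question AI scores for one Auto QAed call,
# grouped by scorecard section (names invented for illustration)
call = {
    "Greeting":   [("Used standard opening", 100), ("Verified caller", 50)],
    "Resolution": [("Offered solution", 100), ("Confirmed resolution", 100)],
    "Closing":    [("Summarised call", 0), ("Polite close", 100)],
}

def section_scores(questions_by_section):
    """Average the question scores within each section."""
    return {
        section: sum(score for _, score in qs) / len(qs)
        for section, qs in questions_by_section.items()
    }

scores = section_scores(call)
weakest = min(scores, key=scores.get)
print(weakest, scores[weakest])  # Closing 50.0
```

In the Section Level Report, this same drill-down is done for you: open the call link and scan for the section with the lowest score, then the question pulling it down.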


Important clarification

  • Auto QA / Auto Sampling reports show AI evaluation results

  • They do not show sampling coverage or rule execution

  • Calls appear here only after Auto QA runs

  • Auto-sampled Auto QA calls must be manually submitted to be finalized


Common user statements → What they usually mean



  • “Auto QA report” → AI Evaluation Dashboard

  • “Auto Sampling report” → AI Evaluation Report

  • “AI score details” → Section Level Report


When to use these reports

Use Auto QA / Auto Sampling Reports to:

  • Track AI QA performance

  • Monitor quality trends

  • Identify weak sections or questions

  • Review AI-evaluated calls

For sampling coverage, rule execution, or call counts, refer to Sampling configuration, not these reports.


When to contact support

Contact support if:

  • Auto QAed calls don’t appear in these reports

  • Scores don’t update after submission

  • Call links are missing or broken