Performance reports

Measure your AI Agent’s performance with a variety of detailed reports. This article outlines the individual reports, what they measure, and how the various filters work. Unless otherwise noted, you can view these reports by going to Performance > Reports in your Ada dashboard.

By default, these reports don’t include data from test users. That means that when you’re testing your AI Agent, you don’t have to worry about skewing your report results.

Learn about each report

Click a report name below to expand it and learn about the metrics it includes. Note that the reports that are available in your AI Agent may vary based on your Ada subscription. If you have any questions, don’t hesitate to contact your Ada team.

For more information on the filters you can use to refine your report data, see the Filter the data that appears in a report section of this page.

This report provides visibility into how often Ada performs each Action, and highlights errors with full log download functionality, allowing your team to troubleshoot effectively. You can access this report through the Reports tab (under Performance) in the left navigation menu, or directly through the report icon at the top of the Actions Hub.

This report includes the following metrics:

Conversations

The number or percentage of conversations where a specific Action was used. Click on this number to see these conversations filtered in the Conversations View.

API calls

The total number of API calls made by an Action.

Error Rate

The percentage of API calls that failed.

AR rate

The percentage of conversations that your AI Agent determined were automatically resolved. Your AI Agent calculates this with the formula Resolved conversations / (Resolved conversations + Not Resolved conversations).

Containment rate

The percent of conversations that did not result in a handoff to human support.

CSAT

The percent of conversations customers reviewed positively, out of all conversations they reviewed.
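
As an illustration of the call-level metrics above, here's a minimal sketch of the Error rate calculation, assuming a simple log of API call outcomes (the field names are hypothetical, not Ada's export format):

    # Hypothetical log of API calls made by one Action.
    api_calls = [
        {"action": "check_order_status", "ok": True},
        {"action": "check_order_status", "ok": False},  # failed call
        {"action": "check_order_status", "ok": True},
        {"action": "check_order_status", "ok": True},
    ]

    total_calls = len(api_calls)  # the "API calls" metric
    failed_calls = sum(1 for call in api_calls if not call["ok"])

    # Error rate: the percentage of API calls that failed.
    error_rate = failed_calls / total_calls * 100
    print(f"{total_calls} calls, error rate {error_rate:.1f}%")  # 4 calls, error rate 25.0%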

View customer satisfaction (CSAT) surveys where the scores are attributed to human support, available if the “Automatically survey after chat” option is turned on.

When you filter this report by date, it uses the date that the user submitted their satisfaction survey, rather than the date the conversation started. As a result, the number of conversations that appear in this report may vary from other reports.

There are four ways you can set up customer satisfaction reviews, each with different scales for recording feedback:

Rating type | Negative review | Positive review
Numeric (5-point scale) | 1, 2, or 3 | 4 or 5
Numeric (10-point scale) | 1, 2, 3, 4, 5, or 6 | 7, 8, 9, or 10
Emoji (5-point scale) | 😠, 🙁, or 😐 | 🙂 or 😍
Thumbs up/down (binary) | 👎 | 👍

This report includes the following metrics:

Live chat score

The percent of agent reviews that were positive. Your AI Agent calculates this with the formula SUM (positive agent reviews) / SUM (all agent reviews) * 100.

Agent name

The name of the agent who spoke with the customer immediately before the customer provided the review. If multiple agents interacted with the customer in the same conversation, all of those agents are assigned the customer's CSAT score, even if only one agent's name appears in this list.

Agent names appear in this list if they have at least one review in the time periods selected for either data display or for comparison.

Avg score

The percent of agent reviews that were positive.

# of positive

The number of agent reviews that were positive.

# of negative

The number of agent reviews that were negative.

Total # of surveys

The total number of agent reviews.
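
To make the scale mapping and the score formula concrete, here's a minimal sketch that classifies reviews as positive or negative and rolls them up into a score. It assumes ratings are stored as plain numbers (thumbs encoded as 0/1, emoji as 1-5); the structures and names are illustrative, not Ada's internal format:

    # Minimum rating that counts as positive on each scale,
    # per the rating-scale table above.
    POSITIVE_THRESHOLDS = {
        "numeric_5": 4,   # 4 or 5 is positive
        "numeric_10": 7,  # 7, 8, 9, or 10 is positive
        "emoji_5": 4,     # 🙂 (4) or 😍 (5) is positive
        "binary": 1,      # 👍 (1) is positive
    }

    def is_positive(rating_type: str, rating: int) -> bool:
        return rating >= POSITIVE_THRESHOLDS[rating_type]

    # Live chat score: SUM(positive agent reviews) / SUM(all agent reviews) * 100.
    reviews = [("numeric_5", 5), ("numeric_5", 3), ("binary", 1), ("numeric_10", 8)]
    positive = sum(1 for rating_type, rating in reviews if is_positive(rating_type, rating))
    live_chat_score = positive / len(reviews) * 100
    print(f"Live chat score: {live_chat_score:.0f}%")  # 3 of 4 positive -> 75%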

The automated resolution rate measures how many conversations your AI Agent was able to resolve automatically.

To calculate the automated resolution rate, your AI Agent analyzes each completed conversation to understand both the customer’s intent and the AI Agent’s response. Based on that analysis, it then assigns a classification of either Resolved or Not Resolved to each conversation.

For a conversation to be considered automatically resolved, the conversation must be:

  • Relevant - Ada effectively understood the customer’s inquiry, and provided directly related information or assistance.

  • Accurate - Ada provided correct, up-to-date information.

  • Safe - Ada interacted with the customer in a respectful manner and avoided engaging in topics that caused danger or harm.

  • Contained - Ada addressed the customer’s inquiry without having to hand them off to a human agent.

    While Containment Rate can be a useful metric to get a quick glance of the proportion of AI Agent conversations that didn’t escalate to a human agent, automated resolution rate takes it a step further. By measuring the success of those conversations and the content they contain, you can get a much better idea of how helpful your AI Agent really is.

Your AI Agent only assesses a conversation for automated resolution after it has ended. When you view the automated resolution rate graph, a dotted line may appear to indicate that recent conversations might not have ended yet, so the automated resolution rate can fluctuate once they're analyzed. For more information on how the conversation lifecycle impacts automated resolution, see automated resolution rate.

In this list, you can view a summary of what each customer was looking for, how your AI Agent classified the conversation, and its reasoning. If you need more information, you can click a row to view the entire conversation transcript.

This report includes the following metrics:

Automated Resolution Rate

The percentage of conversations that your AI Agent determined were automatically resolved. Your AI Agent calculates this with the formula Resolved conversations / (Resolved conversations + Not Resolved conversations).

Containment Rate

The percent of conversations that did not result in a handoff to a human agent.
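
As a rough sketch of how these two rates relate, assume each completed conversation carries a Resolved or Not Resolved classification and a handoff flag (the field names here are illustrative, not Ada's schema):

    # Hypothetical per-conversation records after your AI Agent's analysis.
    conversations = [
        {"classification": "Resolved", "handoff": False},
        {"classification": "Not Resolved", "handoff": True},
        {"classification": "Not Resolved", "handoff": False},  # contained, but not resolved
        {"classification": "Resolved", "handoff": False},
    ]

    resolved = sum(1 for c in conversations if c["classification"] == "Resolved")
    not_resolved = len(conversations) - resolved

    # Automated resolution rate: Resolved / (Resolved + Not Resolved).
    ar_rate = resolved / (resolved + not_resolved) * 100

    # Containment rate: the share of conversations with no handoff to a human agent.
    contained = sum(1 for c in conversations if not c["handoff"])
    containment_rate = contained / len(conversations) * 100

    print(f"AR rate: {ar_rate:.0f}%")                    # AR rate: 50%
    print(f"Containment rate: {containment_rate:.0f}%")  # Containment rate: 75%

Here the contained-but-unresolved conversation keeps the containment rate above the automated resolution rate; that gap is exactly what the AR metric is designed to surface.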

View the average amount of time customers spent talking with your AI Agent, for conversations that didn’t end in handoffs to human support.

This report uses winsorization on all of its metrics. To handle outliers, your AI Agent calculates the 90th percentile of all handle times. If a handle time is higher than the 90th percentile limit, your AI Agent replaces it with the 90th percentile limit instead.
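
As an example, here's a minimal sketch of that capping step, assuming handle times recorded in seconds (this uses numpy's percentile function; the sample data is made up):

    import numpy as np

    # Hypothetical handle times in seconds, including one extreme outlier.
    handle_times = np.array([120, 95, 240, 180, 60, 150, 90, 110, 200, 3600])

    # 90th percentile of all handle times.
    limit = np.percentile(handle_times, 90)

    # Winsorize: replace any handle time above the limit with the limit itself.
    capped = np.minimum(handle_times, limit)

    print(f"Raw average:        {handle_times.mean():.0f}s")
    print(f"Winsorized average: {capped.mean():.0f}s")

Capping at the limit, rather than discarding outliers, keeps every conversation in the average while preventing a single marathon session from dominating it.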

Metric | Definition
Avg handle time when contained | The average amount of time customers spent talking with your AI Agent, for conversations that didn't end in handoffs to human support.
Avg handle time before escalation | The average amount of time customers spent talking to your AI Agent before handoff, for conversations where customers escalated to human support.
Avg handle time with agents | The average amount of time customers spent talking to live support agents.

View how often customers were able to self-serve instead of escalating to human support.

Metric | Definition
Containment rate | The percent of conversations that did not result in a handoff to human support.

View the number of AI Agent, customer, and human agent messages per conversation.

Example conversation (AI Agent messages are numbered in parentheses, customer messages in square brackets, and human agent messages in curly brackets):

AI Agent
  (1) Hello!
  (2) Hello! How can I be of assistance today?

Customer
  [1] Hello
  [2] What is the status of my order?

AI Agent
  (3) I can check on that for you.
  (4) What is your order number?

Customer
  [3] abc123

AI Agent
  (5) Let me fetch that information for you...
  (6) Your order is currently being packaged for shipping.
  (7) Your estimated delivery date is Dec 25.

Customer
  [4] that is too long. let me speak to an agent

AI Agent
  (8) Understood. Connecting you to the next available agent

Human agent
  {1} Hello my name is Sonia. How can I further help you?

Customer
  [5] I need my order sooner. please cancel it

Human agent
  {2} Sorry about the delay. I will cancel your order
  {3} Your order has been cancelled

Customer
  [6] Thank you

This report includes the following metrics:

Number of conversations

The number of conversations where a customer sent at least one message to your AI Agent.

Messages sent

The number of conversations (y-axis) that contained a given number of messages your AI Agent sent (x-axis).

In the example above, where AI Agent messages are counted in parentheses (), this conversation would fall under 8 AI Agent messages. Each response bubble counts as a single message, excluding messages that indicate a live agent has joined or left the chat.

Customer messages received

The number of conversations (y-axis) that contained a given number of messages customers sent (x-axis).

In the example above, where customer messages are counted in square brackets [], this conversation would fall under 6 customer messages.

Agent messages

The number of conversations (y-axis) that contained a given number of messages agents sent (x-axis).

In the example above, where agent messages are counted in curly brackets {}, this conversation would fall under 3 agent messages. Emojis, links, and pictures all count as agent messages for this report.

Number of messages (x-axis)

The number of each type of message per conversation.

Roughly 95% of conversations have fewer than 45 messages of any one type, which is why the upper end of the scale groups all conversations with 45 or more of any one type of message.

Number of conversations (y-axis)

The number of conversations that fall in each quantity of messages.
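
Here's a minimal sketch of the counting and bucketing logic described above, assuming each conversation is stored as a list of (sender, text) messages with join/leave system events already excluded (all names are illustrative):

    from collections import Counter

    # The example conversation above, reduced to (sender, text) pairs.
    messages = (
        [("ai_agent", f"AI Agent message {i}") for i in range(1, 9)]    # (1)-(8)
        + [("customer", f"customer message {i}") for i in range(1, 7)]  # [1]-[6]
        + [("human_agent", f"agent message {i}") for i in range(1, 4)]  # {1}-{3}
    )

    counts = Counter(sender for sender, _ in messages)
    print(counts)  # Counter({'ai_agent': 8, 'customer': 6, 'human_agent': 3})

    # Histogram bucketing for the x-axis: conversations with 45 or more
    # messages of any one type fall into a single top bucket.
    def bucket(count, cap=45):
        return f"{cap}+" if count >= cap else str(count)

    print(bucket(counts["ai_agent"]))  # '8'
    print(bucket(60))                  # '45+'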

View the number of conversations initiated, engaged, and escalated in your AI Agent.

This report includes the following metrics:

Opens

The number of conversations where a customer opened your AI Agent and was presented with a greeting. Every conversation contains one greeting; even if the greeting consists of a series of messages, it counts as a single greeting, and only one of those messages needs to be sent for the conversation to count as an open.

Engaged

The number of conversations where a customer sent at least one message to your AI Agent.

A conversation counts as engaged once a customer sends a message, regardless of whether your AI Agent understands the message.

Escalated

The number of conversations where a customer requested an escalation to human support.

Automatically Resolved

The number of conversations that your AI Agent automatically resolved.

Before July 31, 2024, this number was approximated based on the automated resolution rate (AR%) of a sample of your conversations, and was calculated with the formula # of engaged conversations x AR%.

The calculated number of automatically resolved conversations was subject to the error margin of the calculated AR%.
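
For example, under that formula, 10,000 engaged conversations with a sampled AR% of 60% would have been reported as approximately 6,000 automatically resolved conversations (figures purely illustrative).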

For more information, see Understand and improve your AI Agent’s automated resolution rate.

View the percent of your AI Agent’s conversations that customers reviewed positively. For more information, see Collect and analyze customer satisfaction data with Satisfaction Surveys.

There are four ways you can set up customer satisfaction reviews, each with different scales for recording feedback:

Rating type | Negative review | Positive review
Numeric (5-point scale) | 1, 2, or 3 | 4 or 5
Numeric (10-point scale) | 1, 2, 3, 4, 5, or 6 | 7, 8, 9, or 10
Emoji (5-point scale) | 😠, 🙁, or 😐 | 🙂 or 😍
Thumbs up/down (binary) | 👎 | 👍

Metric | Definition
Overall score | The percent of conversations customers reviewed positively, out of all conversations they reviewed.

This report helps you understand which articles Ada most frequently uses in customer responses, and which articles are correlated with high or low automated resolution rates, as well as other performance metrics. It includes conversation drill-throughs to support improvement workflows. You can access this report through the Reports tab (under Performance) in the left navigation menu, or directly through the report icon at the top of the Knowledge Hub.

This report includes the following metrics:

Conversations

The number or percentage of conversations where a specific article was used. Click on this number to see these conversations filtered in the Conversations View.

AR rate

The percentage of conversations that your AI Agent determined were automatically resolved. Your AI Agent calculates this with the formula Resolved conversations / (Resolved conversations + Not Resolved conversations).

Containment rate

The percent of conversations that did not result in a handoff to human support.

CSAT

The percent of conversations customers reviewed positively, out of all conversations they reviewed.

View the results of your customer satisfaction (CSAT) survey. For more information, see Collect and analyze customer satisfaction data with Satisfaction Surveys.

When you filter this report by date, it uses the date that the user submitted their satisfaction survey, rather than the date the conversation started. As a result, the number of conversations that appear in this report may vary from other reports.

There are four ways you can set up customer satisfaction reviews, each with different scales for recording feedback:

Rating type | Negative review | Positive review
Numeric (5-point scale) | 1, 2, or 3 | 4 or 5
Numeric (10-point scale) | 1, 2, 3, 4, 5, or 6 | 7, 8, 9, or 10
Emoji (5-point scale) | 😠, 🙁, or 😐 | 🙂 or 😍
Thumbs up/down (binary) | 👎 | 👍

This report includes the following metrics:

Last submitted

The most recent time a customer submitted a satisfaction survey.

Agent

The agent, if any, who participated in the conversation. If multiple agents participated in the conversation, this is the agent who participated closest to the end of the chat.

Survey type

The type of survey the customer responded to.

  • End chat: The survey presented to the customer when they click “End chat” outside of a handoff.

  • Live agent: The survey customers receive when they close the chat after speaking with an agent, or when an agent leaves the conversation.

Rated

The satisfaction rating the customer selected.

Reason for rating

The reason(s) that the customer selected in the survey follow-up question, if any.

Possible positive reasons:

  • Efficient chat

  • Helpful resolution

  • Knowledgeable support

  • Friendly tone

  • Easy to use

  • AI Agent was intelligent

  • Other

Possible negative reasons:

  • Took too long

  • Unhelpful resolution

  • Lack of expertise

  • Unfriendly tone

  • Technical issues

  • AI Agent didn’t understand

  • Other

Resolution

The customer’s response, if any, to whether your AI Agent was able to resolve their issue. This can either be yes or no.

Comments

Additional comments, if any, that the customer wanted to include in the survey about their experience.

Filter the data that appears in a report

Filter data by date

To filter a report by date:

  1. Click the date filter drop-down.

  2. Define your date range by one of the following:

    • Select a predefined range from the list on the left.

    • Type the filter start date in the Starting field. Type the filter end date in the Ending field.

    • Click the starting date on the calendar on the left, and the ending date on the calendar on the right.

  3. Click Apply.

The date filter drop-down lets you specify the date range to filter the report's data by. You can select from a list of preset date ranges, or select Custom… to specify your own range with a calendar selector.

Filter data by additional criteria

The list of available filters differs for each report, depending on the data the report includes. Clicking the Add Filter drop-down menu gives you access to the filters relevant to the report you’re viewing.

  • Include test user: Include conversations originating from the Ada dashboard test AI Agent. Test conversations are excluded by default.

  • Action: View conversations relevant only to specific Action(s).

  • Status code: View reporting analytics for API calls that returned a specific class of status code (e.g. 1xx, 2xx, 3xx).

  • Article source: View conversations that referenced articles from a specific source.

  • AR classification: The automated resolution classification your AI Agent assigned to the conversation.

  • CSAT: The customer satisfaction rating the customer gave the conversation.

  • Conversation topic: The topic your AI Agent automatically assigned to the conversation.

  • Conversation category: The category that the assigned conversation topic has been manually grouped under.

  • Engaged: Conversations where a customer sent at least one message to your AI Agent.

  • Handoff: Conversations where your customer was handed off to a human agent.

  • Language (if Multilingual feature enabled): Include or exclude conversation volume in different languages, if your AI Agent has content in other languages.

  • Channel: Isolate different platforms that your AI Agent is visible in or interacts with (for example, Ada Web Chat, SMS, WhatsApp, etc.).

  • Browser: Isolate users from specific internet browsers (for example, Chrome, Firefox, Safari, etc.).

  • Device: Isolate users from specific devices and operating systems (for example, Windows, iPhone, Android, etc.).

  • Filter by variable: View only the conversations which include one or more variables. For each variable, you can define specific content the variable must contain, or simply whether the variable Is Set or Is Not Set with any data.

Additional information

  • Report data is updated approximately every hour (but may take up to three hours).

  • Reports are in the time zone set in your profile.

Printing

We recommend viewing your AI Agent's data in the dashboard for the best experience. However, if you need to save a report as a PDF or print a physical copy, use the following recommendations to limit rendering issues:

  1. Click Print.

  2. In the Print window that appears, beside Destination, select either Save as PDF or a printer.

  3. Click More settings to display additional print settings.

  4. Set Margins to Minimum.

  5. Set Scale to Custom, then change the value to 70.

    • Alternatively, you can set the Paper size to A3 (11-3/4 x 16-1/2 in) or Legal (8.5 x 14 in).

  6. Under Options, select the Background graphics checkbox.

  7. Right before saving or printing, scroll through your print preview, and beside Pages, change the number of pages you want to include in your PDF or printout. The settings you changed above may affect how these pages render.

  8. If your destination is Save as PDF, click Save. If your destination is a printer, click Print.