Introducing Learning
The Learning Conversations section is designed to enhance Smart Assistant’s performance and accuracy by collecting real interactions that contribute to model training and optimization: positive examples, negative cases, and fallback responses.
It allows teams to review how the assistant performs in real-world scenarios, identify improvement opportunities, and mark conversations that serve as learning material.
There are three subsections within Learning: Positive, Negative, and Fallback conversations.
Positive Conversations
Conversations that have been manually marked as positive by a manager from the conversation detail view. They represent accurate, well-handled, and successful interactions where Smart Assistant met or exceeded user expectations.
These examples are used as reinforcement data, helping maintain the quality and tone of the assistant’s answers.
Negative Conversations
Conversations manually marked as negative by a manager. They represent unsatisfactory interactions where Smart Assistant misunderstood the question, failed to answer correctly, or created user frustration.
These cases are used as training corrections, allowing the assistant to learn from its mistakes and refine intent detection or content.
Fallback Conversations
Conversations that include at least one fallback event, that is, a moment when Smart Assistant could not answer due to missing knowledge or uncertainty.
They are captured automatically to identify knowledge gaps or topics not yet covered by the assistant’s content base.
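The three categories above could be modeled roughly as follows. This is a minimal sketch: the class, enum, and field names are illustrative, not Smart Assistant’s actual data model. It captures the rules described here, namely that Positive and Negative labels are set manually by a manager, while Fallback membership is automatic once a conversation contains at least one fallback event.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class LearningLabel(Enum):
    POSITIVE = "positive"   # manually marked by a manager
    NEGATIVE = "negative"   # manually marked by a manager
    FALLBACK = "fallback"   # captured automatically

@dataclass
class Conversation:
    id: str
    fallback_events: int = 0                          # fallback moments detected automatically
    manager_label: Optional[LearningLabel] = None     # manual Positive/Negative mark, if any

def learning_categories(conv: Conversation) -> set:
    """A conversation can appear under a manual label, under Fallback, or both."""
    labels = set()
    if conv.manager_label in (LearningLabel.POSITIVE, LearningLabel.NEGATIVE):
        labels.add(conv.manager_label)
    if conv.fallback_events >= 1:   # at least one fallback event
        labels.add(LearningLabel.FALLBACK)
    return labels
```

Note that under these rules a single conversation can legitimately show up in two subsections, for example a manager-marked Negative conversation that also triggered a fallback.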
Period selection and filters
At the top of the page, you can select the Time Range you want to analyze. The available options are Last 24 hours, Last week, Last month, Last year, or a Custom range.
To refine your search, use the Filter panel to focus on specific parameters:
- Rated: displays only conversations that received user feedback.
- Claim of an agent: filters conversations that were taken over by a human operator.
- Client: shows conversations belonging to a specific identified user.
- Assigned to: allows filtering by the Agent responsible for the conversation.
- Language: limits results to conversations held in a particular language.
- Topic: filters by main subject of the conversation.
- Subtopic: refines the selection within a specific Topic, helping locate precise types of user requests.
Filters can be combined to narrow down results, for example: “English conversations about returns that were escalated to an agent and received a rating.”
Click Apply to refresh the list, or Reset to clear all filters and return to the full dataset.
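Combining filters works as an AND across the set parameters. The behavior can be sketched as a simple predicate over conversation records; the field names below are assumptions based on the filter labels, not a documented schema:

```python
# Illustrative records; field names mirror the filter labels above.
conversations = [
    {"id": "A1", "language": "en", "topic": "returns", "escalated": True,  "rating": 4},
    {"id": "A2", "language": "en", "topic": "billing", "escalated": False, "rating": None},
    {"id": "A3", "language": "fr", "topic": "returns", "escalated": True,  "rating": 5},
]

def matches(conv, language=None, topic=None, escalated=None, rated=None):
    """Apply only the filters that were set; multiple filters are ANDed."""
    if language is not None and conv["language"] != language:
        return False
    if topic is not None and conv["topic"] != topic:
        return False
    if escalated is not None and conv["escalated"] != escalated:
        return False
    if rated is not None and (conv["rating"] is not None) != rated:
        return False
    return True

# "English conversations about returns that were escalated and received a rating"
result = [c["id"] for c in conversations
          if matches(c, language="en", topic="returns", escalated=True, rated=True)]
# result == ["A1"]
```

Leaving a parameter unset (the equivalent of Reset for that filter) means it does not constrain the result.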

Conversations table
The main table displays all archived conversations matching your selected criteria. Each row corresponds to one Conversation, providing both quantitative and contextual details.
| Column | Description |
|---|---|
| ID | The unique identifier assigned to each conversation, used for reference or export. |
| Start / End | Timestamps showing when the conversation began and ended. They help identify peak hours or long sessions. |
| Duration | Total time between the start and end of the conversation. Longer durations may indicate complex interactions or multiple exchanges. |
| Queries | The number of unique user questions detected in the conversation. Useful for understanding how many topics or intents were covered. |
| Messages | Total number of exchanged messages between the user and Smart Assistant (including both sides). This reflects interaction depth. |
| Rating | Represents the user’s satisfaction score, ranging from 1 to 5. Higher ratings indicate a more positive experience. If a red dot appears next to the score, it means the user has included an additional feedback comment along with their rating. |
| User | The name or alias of the person interacting with Smart Assistant. |
| User ID | The internal identifier assigned to that user within your system. |
| Escalated | Indicates whether the conversation was transferred to a human agent. |
| Assigned | Shows the agent or team responsible for handling the conversation, if applicable. |
| Language | The detected language of the conversation, useful for multilingual operations. |
| Location | Geographical origin inferred from user metadata. |
| Channel | The channel used for the conversation. |
| Device | The user’s device type, helping evaluate behavior per environment. |

The table supports sorting and pagination, allowing you to focus on the most relevant data.
Click any conversation ID to open its detailed view, where you can review the full message history, ratings, and associated metadata.
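Two of the behaviors above are purely derived from the row data: Duration is the time between the Start and End timestamps, and sorting reorders rows by the clicked column. A rough illustration, using invented sample rows rather than real export data:

```python
from datetime import datetime

rows = [
    {"id": "C1", "start": datetime(2024, 5, 1, 9, 0), "end": datetime(2024, 5, 1, 9, 12), "rating": 3},
    {"id": "C2", "start": datetime(2024, 5, 1, 9, 5), "end": datetime(2024, 5, 1, 9, 6),  "rating": 5},
]

# Duration: total time between the start and end of the conversation.
for row in rows:
    row["duration_min"] = (row["end"] - row["start"]).total_seconds() / 60

# Sorting highest-rated first mirrors clicking the Rating column header.
by_rating = sorted(rows, key=lambda r: r["rating"], reverse=True)
# C1 lasted 12 minutes, C2 lasted 1 minute; by_rating puts C2 first
```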
How to use this page
The Learning Conversations section is designed for continuous improvement of the Smart Assistant. By reviewing and classifying conversations, you can train the system to better understand user intent, detect weaknesses, and strengthen automated responses.
Positive conversations serve as learning examples for reinforcement, while negative and fallback cases help refine datasets, improve natural language models, and expand knowledge coverage.
Notes and Recommendations
- Review Fallback conversations regularly to identify missing or low-confidence intents.
- Use Negative conversations to detect recurring user frustrations or misunderstood topics.
- Mark successful Positive conversations as examples of correct assistant behavior and tone.
- Combine insights from this page with Fallback Rate, Containment Rate, and Satisfaction Rate metrics in the Conversation Statistics section to evaluate training progress.
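For the metric cross-check in the last recommendation, the commonly used definitions of these rates look like the sketch below. These are generic formulas for illustration; Smart Assistant’s Conversation Statistics section may compute them differently.

```python
# Common definitions of these rates; the product's exact formulas may differ.
conversations = [
    {"fallbacks": 0, "escalated": False, "rating": 5},
    {"fallbacks": 2, "escalated": True,  "rating": 2},
    {"fallbacks": 0, "escalated": False, "rating": None},
    {"fallbacks": 1, "escalated": False, "rating": 4},
]

total = len(conversations)
# Share of conversations with at least one fallback event.
fallback_rate = sum(1 for c in conversations if c["fallbacks"] > 0) / total
# Share of conversations resolved without escalation to a human agent.
containment_rate = sum(1 for c in conversations if not c["escalated"]) / total
# Average 1-5 score over conversations that received a rating.
rated = [c["rating"] for c in conversations if c["rating"] is not None]
satisfaction_rate = sum(rated) / len(rated)

# fallback_rate == 0.5, containment_rate == 0.75, satisfaction_rate == 11/3
```

Tracking these three numbers before and after a round of marking and retraining gives a concrete measure of training progress.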