List of archived conversations
The Archived Conversations page provides access to the full history of user interactions with Smart Assistant. It allows you to review how conversations evolved over time, evaluate user satisfaction, and monitor the assistant’s operational performance.
This section is especially useful for quality control, internal training, and identifying opportunities to improve automated responses or escalation handling.
A conversation may have been created several days ago, but if the user sends a new message it becomes 'live' again. In that case, the archived conversation is automatically moved back to the Live Conversations section, so teams can continue the interaction without losing its history.
A Live Conversation that remains inactive for more than four hours, meaning no messages are exchanged during that time, is automatically moved to the Archived Conversations section. This ensures that the Live view only displays ongoing and actively managed interactions.
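The transition between Live and Archived boils down to a simple inactivity rule. The sketch below is purely illustrative; the function and field names are assumptions and not part of the product, only the four-hour threshold comes from the rule described above.

```python
from datetime import datetime, timedelta

# Hypothetical constant; the four-hour threshold is the rule described above.
INACTIVITY_LIMIT = timedelta(hours=4)

def should_archive(last_message_at: datetime, now: datetime) -> bool:
    """A conversation with no messages for more than four hours is archived."""
    return now - last_message_at > INACTIVITY_LIMIT

def on_new_message(conversation: dict, now: datetime) -> dict:
    """Any new user message makes the conversation 'live' again,
    even if it was created days ago and already archived."""
    conversation["last_message_at"] = now
    conversation["status"] = "live"
    return conversation
```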
Period selection and filters
At the top of the page, you can select the Time Range you want to analyze. The available options are Last 24 hours, Last week, Last month, Last year, or a Custom range.
To refine your search, use the Filter panel to focus on specific parameters:
- Rated: displays only conversations that received user feedback.
- Claim of an agent: filters conversations that were taken over by a human operator.
- Client: shows conversations belonging to a specific identified user.
- Assigned to: allows filtering by the Agent responsible for the conversation.
- Language: limits results to conversations held in a particular language.
- Topic: filters by main subject of the conversation.
- Subtopic: refines the selection within a specific Topic, helping locate precise types of user requests.
Filters can be combined to narrow down results, for example: “English conversations about returns that were escalated to an agent and received a rating.”
Click Apply to refresh the list, or Reset to clear all filters and return to the full dataset.
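To make the combined-filter example above concrete, the sketch below expresses it as a predicate over exported conversation records. The field names (language, topic, escalated, rating) mirror the table columns but are assumptions, not the product's API.

```python
# Minimal, illustrative data; real exports will contain many more fields.
conversations = [
    {"id": "c-001", "language": "en", "topic": "Returns", "escalated": True, "rating": 4},
    {"id": "c-002", "language": "fr", "topic": "Billing", "escalated": False, "rating": None},
]

def matches_example_filter(conv: dict) -> bool:
    """English conversations about returns, escalated to an agent, and rated."""
    return (
        conv["language"] == "en"
        and conv["topic"] == "Returns"
        and conv["escalated"]                 # "Claim of an agent" filter
        and conv["rating"] is not None        # "Rated" filter
    )

filtered = [c for c in conversations if matches_example_filter(c)]  # keeps only "c-001"
```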

Conversations table
The main table displays all archived conversations matching your selected criteria. Each row corresponds to one Conversation, providing both quantitative and contextual details.
| Column | Description |
|---|---|
| ID | The unique identifier assigned to each conversation, used for reference or export. |
| Start / End | Timestamps showing when the conversation began and ended. They help identify peak hours or long sessions. |
| Duration | Total time between the start and end of the conversation. Longer durations may indicate complex interactions or multiple exchanges. |
| Queries | The number of unique user questions detected in the conversation. Useful for understanding how many topics or intents were covered. |
| Messages | Total number of exchanged messages between the user and Smart Assistant (including both sides). This reflects interaction depth. |
| Rating | The user’s satisfaction score, from 1 to 5; higher ratings indicate a more positive experience. A red dot next to the score means the user also left a feedback comment with their rating. |
| User | The name or alias of the person interacting with Smart Assistant. |
| User ID | The internal identifier assigned to that user within your system. |
| Escalated | Indicates whether the conversation was transferred to a human agent. |
| Assigned | Shows the agent or team responsible for handling the conversation, if applicable. |
| Language | The detected language of the conversation, useful for multilingual operations. |
| Location | Geographical origin inferred from user metadata. |
| Channel | The channel used for the conversation. |
| Device | The user’s device type, helping evaluate behavior per environment. |

The table supports sorting and pagination, allowing you to focus on the most relevant data.
Click any conversation ID to open its detailed view, where you can review the full message history, ratings, and associated metadata.
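If you export archived conversations for your own analysis, one row can be modelled along the lines of the columns above. The sketch below is a hypothetical shape, not the product's export schema; field names and types are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class ArchivedConversation:
    """Hypothetical shape of one row in the Archived Conversations table."""
    id: str
    start: datetime
    end: datetime
    queries: int              # unique user questions detected
    messages: int             # total messages exchanged, both sides
    rating: Optional[int]     # 1-5, None when the user left no rating
    user: str
    user_id: str
    escalated: bool
    assigned: Optional[str]   # agent or team, if applicable
    language: str
    location: Optional[str]
    channel: str
    device: str

    @property
    def duration(self) -> timedelta:
        # Duration column: total time between the start and end of the conversation.
        return self.end - self.start
```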
How to use this page
The Archived Conversations section is designed for both operational and qualitative analysis. Managers can track conversation trends, identify the most common topics, and verify how effectively Smart Assistant resolves queries or escalates to agents.
It is also a key area for auditing satisfaction ratings and ensuring compliance with service-level expectations.
Notes and Recommendations
- Use filters to isolate conversations from specific campaigns, products, or languages and compare satisfaction or escalation trends.
- Pay attention to Duration, Messages, and Rating together: long conversations with low ratings may indicate user friction (see the sketch after this list).
- Combine insights from this page with Containment Rate and Fallback Rate from the Conversation Statistics section to evaluate overall assistant performance.
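As a starting point for spotting friction, the sketch below flags long, message-heavy conversations that still ended with a low rating. The thresholds and field names are illustrative assumptions; tune them to your own data.

```python
# Illustrative thresholds only.
LONG_MINUTES = 20
MANY_MESSAGES = 15
LOW_RATING = 2

def shows_friction(row: dict) -> bool:
    """Flag long, message-heavy conversations that still ended with a low rating."""
    return (
        row["duration_minutes"] > LONG_MINUTES
        and row["messages"] > MANY_MESSAGES
        and row["rating"] is not None
        and row["rating"] <= LOW_RATING
    )

rows = [
    {"id": "c-102", "duration_minutes": 34, "messages": 22, "rating": 1},
    {"id": "c-103", "duration_minutes": 3, "messages": 4, "rating": 5},
]
friction_cases = [r for r in rows if shows_friction(r)]  # keeps only "c-102"
```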