Fallback conversations
This page lists conversations that include at least one fallback event: moments when Smart Assistant could not answer due to missing knowledge or uncertainty.
These conversations are captured automatically to help you identify knowledge gaps or topics not yet covered by the assistant's content base.
Period selection and filters
At the top of the page, you can select the Time Range you want to analyze. The available options are Last 24 hours, Last week, Last month, Last year, or a Custom range.
To refine your search, use the Filter panel to focus on specific parameters:
- Rated: displays only conversations that received user feedback.
- Claim of an agent: filters conversations that were taken over by a human operator.
- Client: shows conversations belonging to a specific identified user.
- Assigned to: allows filtering by the Agent responsible for the conversation.
- Language: limits results to conversations held in a particular language.
- Topic: filters by main subject of the conversation.
- Subtopic: refines the selection within a specific Topic, helping locate precise types of user requests.
Filters can be combined to narrow down results, for example: “English conversations about returns that were escalated to an agent and received a rating.”
Click Apply to refresh the list, or Reset to clear all filters and return to the full dataset.
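If you export filtered conversations for offline analysis, the same combinable filters can be reproduced in a few lines of code. The sketch below is a minimal illustration, not a documented API: the field names (`language`, `topic`, `escalated`, `rating`) are assumptions about an export format, and `filter_conversations` is a hypothetical helper.

```python
def filter_conversations(conversations, language=None, topic=None,
                         escalated=None, rated=None):
    """Return conversations matching every filter that is set (None = ignore).

    Field names are assumptions about a hypothetical export schema.
    """
    results = []
    for conv in conversations:
        if language is not None and conv.get("language") != language:
            continue
        if topic is not None and conv.get("topic") != topic:
            continue
        if escalated is not None and conv.get("escalated") != escalated:
            continue
        # "Rated" means a rating value is present, mirroring the Rated filter.
        if rated is not None and (conv.get("rating") is not None) != rated:
            continue
        results.append(conv)
    return results

# The worked example from above: English conversations about returns
# that were escalated to an agent and received a rating.
sample = [
    {"id": "c1", "language": "en", "topic": "returns", "escalated": True, "rating": 4},
    {"id": "c2", "language": "en", "topic": "returns", "escalated": False, "rating": 5},
    {"id": "c3", "language": "fr", "topic": "returns", "escalated": True, "rating": None},
]
matches = filter_conversations(sample, language="en", topic="returns",
                               escalated=True, rated=True)
print([c["id"] for c in matches])  # → ['c1']
```

As in the dashboard, leaving a filter unset means it does not constrain the results, so filters can be combined freely.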

Conversations table
The main table displays all archived conversations matching your selected criteria. Each row corresponds to one Conversation, providing both quantitative and contextual details.
| Column | Description |
|---|---|
| ID | The unique identifier assigned to each conversation, used for reference or export. |
| Start / End | Timestamps showing when the conversation began and ended. They help identify peak hours or long sessions. |
| Duration | Total time between the start and end of the conversation. Longer durations may indicate complex interactions or multiple exchanges. |
| Queries | The number of unique user questions detected in the conversation. Useful for understanding how many topics or intents were covered. |
| Messages | Total number of exchanged messages between the user and Smart Assistant (including both sides). This reflects interaction depth. |
| Rating | The user's satisfaction score, from 1 to 5; higher ratings indicate a more positive experience. A red dot next to the score means the user also left a feedback comment with their rating. |
| User | The name or alias of the person interacting with Smart Assistant. |
| User ID | The internal identifier assigned to that user within your system. |
| Escalated | Indicates whether the conversation was transferred to a human agent. |
| Assigned | Shows the agent or team responsible for handling the conversation, if applicable. |
| Language | The detected language of the conversation, useful for multilingual operations. |
| Location | Geographical origin inferred from user metadata. |
| Channel | The channel used for the conversation. |
| Device | The user’s device type, helping evaluate behavior per environment. |

The table supports sorting and pagination, allowing you to focus on the most relevant data.
Click any conversation ID to open its detailed view, where you can review the full message history, ratings, and associated metadata.
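When working with exported rows, the Duration column can be recomputed from the Start and End timestamps. The snippet below is a minimal sketch that assumes ISO 8601 timestamps; the actual export format may differ.

```python
from datetime import datetime

def conversation_duration(start: str, end: str) -> float:
    """Return the duration in seconds between two ISO 8601 timestamps.

    Assumes the export uses ISO 8601 strings; adjust parsing otherwise.
    """
    t0 = datetime.fromisoformat(start)
    t1 = datetime.fromisoformat(end)
    return (t1 - t0).total_seconds()

# A 7.5-minute conversation yields 450 seconds.
print(conversation_duration("2024-05-01T10:00:00", "2024-05-01T10:07:30"))  # 450.0
```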
Notes and Recommendations
- Review individual conversations and mark them as Positive when the bot has answered key questions satisfactorily.
- Do not mark every satisfactory conversation; mark only those that stand out in terms of their questions, answers, or user cases.
- This selective feedback helps you tailor the bot to your needs.