Below you will learn how to deal with advanced concepts in machine learning intents, such as overlapping intents or multi-step intent detection.
To illustrate these concepts, we introduce the example of a location mapping app. Consider an intent to help users find a parking spot. Example sentences could include:
Where is the next car park
I'm looking for parking spot
It is important to understand the nature of side effects in the intent mapping algorithm.
The machine solves a classification problem over the example sentences: each example sentence is used as input data labeled with its intent. From this, we build a machine learning model that is able to predict the intent of any new input sentence.
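To make the idea concrete, here is a minimal, purely illustrative sketch of classification over labeled example sentences. It is not Cognigy's actual algorithm; the intent names and example sentences come from this article, and the word-overlap scoring is a deliberately simple stand-in for a real model.

```python
from collections import Counter

# Labeled example sentences, as you would enter them per intent (illustrative).
EXAMPLES = {
    "CarPark":    ["Where is the next car park",
                   "I'm looking for parking spot"],
    "PublicPark": ["Where can I find a park nearby",
                   "I want a green space for a walk"],
}

def tokenize(sentence):
    return sentence.lower().split()

# Bag-of-words vocabulary per intent, built from its example sentences.
PROFILES = {intent: Counter(w for s in sents for w in tokenize(s))
            for intent, sents in EXAMPLES.items()}

def predict(sentence):
    """Score each intent by word overlap with its profile; return the best."""
    words = set(tokenize(sentence))
    scores = {intent: sum(1 for w in words if w in profile)
              for intent, profile in PROFILES.items()}
    return max(scores, key=scores.get)

predict("I'm looking for a car park")  # → "CarPark"
```

Even in this toy model you can see the key property: prediction depends entirely on the example sentences you provide, which is why careful example design matters so much.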
Prediction results differ depending on what data you choose to remove or add to the model.
Maybe you have noticed prediction scores going down slightly as you add more intents to your model? For example, if you combine intents from two Flows using Flow attachment, intent scores will be slightly lower. This is because new intents decrease the likelihood of any individual intent when scoring an input sentence: the prediction probability is calculated based only on the model's internal data, so everything depends on everything else. Keeping this in mind will help you understand and troubleshoot issues.
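The dilution effect can be illustrated with a toy normalization step (the numbers and the normalization scheme are illustrative assumptions, not Cognigy's actual scoring): the same raw evidence for CarPark yields a lower probability once a third intent shares the probability mass.

```python
def normalize(raw_scores):
    """Turn raw per-intent scores into probabilities that sum to 1."""
    total = sum(raw_scores.values())
    return {intent: score / total for intent, score in raw_scores.items()}

# Raw similarity scores for the same input sentence, before and after
# a third intent is added to the model (illustrative numbers).
two_intents   = {"CarPark": 4.0, "PublicPark": 1.0}
three_intents = {"CarPark": 4.0, "PublicPark": 1.0, "ThemePark": 1.0}

p_before = normalize(two_intents)["CarPark"]    # 4/5 = 0.8
p_after  = normalize(three_intents)["CarPark"]  # 4/6 ≈ 0.67
```

The raw evidence for CarPark is unchanged, yet its probability drops simply because another intent was added.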
What if we wanted to accommodate both an intent to find parking spots for cars as well as an intent to help outdoor lovers find a public park nearby? We want Cognigy to distinguish the two intents CarPark and PublicPark:
- I'm looking for parking spot (triggers CarPark)
- Where can I find a park nearby (triggers PublicPark)
Why are overlapping intents challenging for the machine?
Just as in human conversation, ambiguity poses a stumbling block, and we regularly have to ask what the other person means precisely. Conversational AIs are no different. As designers, we have to be careful to anticipate this ambiguity beforehand.
Fortunately, Cognigy offers a rich toolbox of solutions to deal with such overlapping intents.
Here are several strategies to approach this problem before it arises:
Often it is sufficient to carefully design similar Intents with well-crafted example sentences. For this, it is important to clearly work out the differences between intents so that the machine is able to differentiate them.
In our example, we can have the two parking intents co-exist by populating their example sentences accordingly: we provide sufficient example sentences, bring in variation, and add the few problematic sentences we find as new example sentences via the Intent Trainer.
In our example:
- we take care to associate words like car, drive, spot, parking lot, free, garage etc. with the CarPark intent
- we take care to associate words like green, public, walk etc. with the PublicPark intent
Cleaning up poorly crafted intents and correcting mistakes with the Intent Trainer usually leads to results very quickly. The machine will then be able to distinguish the parking intents based on the additional information carried by other words in the input sentence.
You can use multi-step intent detection to deal with overlapping intents or separate your intents into smaller, more manageable models.
Now imagine we also had to distinguish intents to find safari parks, nature parks, theme parks, technology parks etc. separately from car and public parks. Sometimes a more effective or surefire way to deal with hard intent disambiguation issues is a multi-step approach. Your options in Cognigy are almost limitless and can handle almost any complex pattern you might encounter.
To deal with parks effectively, for example, we created a Lexicon with a general park tag as well as the tags carPark and publicPark associated with the finer-grained intents we're interested in. All keyphrases share the general park tag in addition to an optional specific tag pointing to a subclass.
Now, we can first recognize a query for parks in a general catch-all intent for parks:
When we recognize the intent in our main flow, we execute a flow with finer grained intents:
In the ParkHandler flow we can handle the many variations of parks with fine-grained intents that produce the appropriate final response, or fall back to a clarifying response to disambiguate the user input.
This pattern works very well when the general intent is able to recognize all specific input queries.
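The two-step pattern can be sketched in plain Python. This is an illustrative simulation, not Cognigy's implementation: the `LEXICON` entries, the tag names, and the `ParkFallback` label are assumptions based on the example above.

```python
# Hypothetical Lexicon: every keyphrase carries the general "park" tag,
# plus an optional specific tag pointing to a subclass.
LEXICON = {
    "car park":    {"park", "carPark"},
    "parking lot": {"park", "carPark"},
    "public park": {"park", "publicPark"},
    "green space": {"park", "publicPark"},
    "park":        {"park"},            # general keyphrase, no subclass
}

def tags_in(sentence):
    """Collect the tags of all keyphrases found in the input sentence."""
    text = sentence.lower()
    found = set()
    for phrase, tags in LEXICON.items():
        if phrase in text:
            found |= tags
    return found

def map_intent(sentence):
    """Step 1: general catch-all park intent; step 2: finer-grained dispatch."""
    tags = tags_in(sentence)
    if "park" not in tags:
        return None              # out of scope for the ParkHandler Flow
    if "carPark" in tags:
        return "CarPark"
    if "publicPark" in tags:
        return "PublicPark"
    return "ParkFallback"        # ask the user to disambiguate
```

The first check plays the role of the catch-all intent in the main flow; the remaining checks play the role of the fine-grained intents in the ParkHandler flow, with a fallback when only the general tag matched.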
Also consider a multi-step approach to deal with a large number of intents. As intent training time increases for large models, you can split a large knowledge base, for example, into topics that lead to separately trained, topic-specific Flows and associated intent models.
You may also use ML and Rule Intents in parallel to capture the same conceptual intent of your users.
To use multiple Cognigy Intents in parallel you can use Think nodes.
The idea is to use a Think node to trigger a final intent where the appropriate response and conversation logic resides to handle the end user's intent.
Here we created a Rule Intent CarParkRuleIntent to trigger the intent to find a car park based on keyphrases. When it triggers, we Think an example sentence of the ML Intent CarParkMLIntent, which is then handled with the appropriate reply:
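A rough sketch of this Rule-Intent-plus-Think pattern, under stated assumptions: the keyphrase set, the stand-in `ml_intent` scorer, and the reply text are all hypothetical; in Cognigy, the Think node re-injects the sentence into the real NLU pipeline.

```python
# Keyphrases of the hypothetical CarParkRuleIntent.
RULE_KEYPHRASES = {"garage", "parking lot"}

# One example sentence of the ML Intent CarParkMLIntent (from this article).
ML_EXAMPLE = "I'm looking for parking spot"

def ml_intent(sentence):
    # Stand-in for the trained model; the real scoring lives in Cognigy.
    return "CarParkMLIntent" if "parking" in sentence.lower() else None

def handle(sentence):
    """If the rule fires, 'Think' the ML example so the ML intent's reply logic runs."""
    if any(phrase in sentence.lower() for phrase in RULE_KEYPHRASES):
        sentence = ML_EXAMPLE            # the Think node's injected input
    if ml_intent(sentence) == "CarParkMLIntent":
        return "Here is the nearest car park."
    return "Sorry, I didn't get that."
```

Both paths, keyphrase rule and ML prediction, converge on the same reply logic, which is the point of the pattern: one place owns the response for the conceptual intent.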
In Cognigy, intent mapping is currently done per Flow, so there is one intent model per Flow. All intents in the parent Flow and all intents in Attached Flows are trained together in a single model.
Flows used in Switch Flow or Execute Flow nodes are separate Flows; their intents are trained in a separate model. The depth of resolution is only one level. That is, intents from attached Flows of attached Flows will never be used during intent training.
For intents that are trained together, the machine will attempt to distinguish them from one another when scoring a new input sentence. For intents that are trained separately, the machine will generally do a poorer job in distinguishing an intent from another intent outside of the model.
Just as a human would when faced with ambiguity, it is a good idea to ask and clarify what the other side wants. For example, we could simply respond with a quick reply to effectively resolve the user query:
Machine learning models work best with a comparable amount of information for each intent class. That is, ideally all intents have a similar number of example sentences and are clearly separable in terms of content. While the model is able to deal with imperfect input, it always helps if you make the job easier for the machine.
Make sure you do not have intents that consist of only a single word or sentence without useful information. Such intents may reduce the overall efficacy of the model.
Out-of-scope sentences are inputs that fall outside the scope of the conversation and should not trigger any intent.
Coming back to our parking example, imagine we also had users looking for green space to take a breath of fresh air instead of a dirty parking lot.
Where can I find a park nearby
Say your app only deals with drivers and you simply want to ignore such requests. In this case, you want to add the offending sentences to the Reject Intent.
As a best practice, treat the Reject Intent, which captures utterances your bot should ignore, with the same importance as any other intent essential to the functioning of your bot.
Alternatively, you may want to reconsider the design of your bot. If you encounter a class of out-of-scope utterances frequently in your logs, then you likely want to add an additional intent. Addressing the expressed user intent is valuable even if the response only clarifies the scope of your bot.
If you require more advanced out-of-scope recognition, you can also use a Rule or ML Intent to capture out-of-scope sentences and simply ignore that intent elsewhere in your Flow.
You will have to be mindful of your architecture when using Rule and ML Intents to catch out-of-scope sentences, however, especially when using Attached Flows: an attached Flow is executed if and only if one of its intents is triggered. To avoid false positives in other Flows, put the Flow that captures the out-of-scope intents at the top of the attached Flow ordering and possibly enable the 'Map global intents first' setting.
Finally, see below on how to configure your thresholds optimally to avoid intent mismatches on out-of-scope sentences. Generally, the lower your threshold, the more likely you are to encounter false positive matches on out-of-scope utterances.
The default thresholds will work great in most situations. Different threshold levels may be optimal, however, depending on your use case, the design and state of development of your intents.
In short, the intent thresholds balance precision and recall.
The lower the threshold, the higher the recall. That is, you are less likely to fail to recognize an intent that should be triggered.
The higher the threshold, the higher your precision. That is, you will have fewer false positives.
The better maintained and designed your intents are, the lower you can set your thresholds while maintaining high precision and recall on new, unknown input sentences.
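The precision/recall trade-off can be made tangible with a toy threshold sweep. The scores and correctness labels below are invented for illustration; only the mechanism (count a prediction as triggered when its score clears the threshold) reflects the discussion above.

```python
# (score, fired_intent_would_be_correct) pairs for a set of test
# utterances scored against the model (illustrative numbers).
PREDICTIONS = [(0.95, True), (0.80, True), (0.55, True),
               (0.45, False), (0.30, False), (0.25, True), (0.10, False)]

def precision_recall(threshold):
    """Treat every score >= threshold as a triggered intent."""
    fired    = [correct for score, correct in PREDICTIONS if score >= threshold]
    relevant = sum(correct for _, correct in PREDICTIONS)
    true_pos = sum(fired)
    precision = true_pos / len(fired) if fired else 1.0
    recall    = true_pos / relevant
    return precision, recall

precision_recall(0.5)  # → (1.0, 0.75): precise, but misses one intent
precision_recall(0.2)  # → (~0.67, 1.0): catches everything, more false positives
```

Lowering the threshold from 0.5 to 0.2 recovers the missed intent (recall rises to 1.0) at the cost of two false positives (precision drops), which is exactly the balance the thresholds control.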
Moreover, depending on your conversation design, you may have sensitive intents that should only be triggered with high certainty.
With Reconfirmation questions you can further modulate the conversation flow by triggering a reconfirmation question if the intent prediction score falls in between the reconfirmation and confirmation threshold.
For intents with very few example sentences or sensitive topics, we recommend confirmation thresholds as high as 0.7 or 0.8.
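The two-threshold logic reduces to a simple decision function. This is an illustrative sketch, not Cognigy's code, and the default values 0.4 and 0.7 are example numbers, not product defaults.

```python
def intent_action(score, reconfirmation=0.4, confirmation=0.7):
    """Map an intent score to an action using two thresholds (illustrative values)."""
    if score >= confirmation:
        return "execute"      # trigger the intent's response directly
    if score >= reconfirmation:
        return "reconfirm"    # ask a reconfirmation question first
    return "reject"           # treat the input as not recognized
```

Scores in the band between the two thresholds trigger the reconfirmation question; everything above the confirmation threshold executes directly.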
With well-designed intents and a balanced conversation Flow, you may well achieve close to perfect precision and recall with hundreds of intents at thresholds as low as 0.2.
With attached Flows or Execute Flow and Switch Flow nodes you can even group your intents under different thresholds. You can also set different thresholds per intent, depending on your state etc., using CognigyScript, which also gives you access to the current intent score.
For example, a common pattern is to keep popular and important intents and Flow logic in a deeply developed main Flow with low thresholds, and to put gimmicky or rare small talk intent logic in a separate Flow with a high threshold or even an exact-match threshold of 1. Note that Cognigy's smalltalk Flows have a threshold of 1.
The following factors affect training time:
- The more example sentences you have, the longer the training time. Cognigy is designed for the reasonable number of intents expected within the conversational AI environment of a single Flow.
- Imbalanced, poorly delineated or poorly designed intents may affect both training time and the efficacy of the model.
- If you require more than many hundreds of intents, you might be dealing with a conversational search problem instead of an intent classification problem. Conversational search is appropriate for large knowledge bases and FAQs and is better approached with more efficient search technologies. Contact our support staff to learn how our customers realize cutting-edge conversational search solutions using our API & DB Nodes or Code Nodes.
- Attached Lexicons affect training time through the Synonyms and Tags used in example sentences.
- Consider training and handling intents separately via Execute Flow and Switch Flow. You can use this approach directly for intents and Flow logic without side effects, such as small talk. If you have overlapping intents, consider a multi-step intent mapping approach.
- Consider Rule Intents and Keyphrases, which are generally easier to maintain and much faster and more lightweight computationally.
- Server hardware capacity and congestion on your Cognigy installation may affect training times; see the guidance on timeout errors below. The CPU clock speed of the hardware as well as I/O speed and latency affect individual training job completion time, while horizontal scaling and the number of NLP services and dedicated compute resources determine the total training throughput of a Cognigy installation.
- NLU 2.1 versions and later use advanced caching which affects training times as follows:
- Repeated training iterations on intent collections with limited changes may complete faster
- Changes in the server-side cache may increase training times. For example, immediately after maintenance windows with server restarts or updates you may see longer training times because the cache was emptied.
This error means that the training job timed out and could not be completed because the server had insufficient capacity: the training job was either too large or the server was too busy with other training jobs and computations.
Please try training again later; if the problem persists, contact your system administrator.
The table below lists guidance on the size of intent collections that can reasonably be expected to train. Note that when using NLU 2.1, intelligent caching means repeated training runs may be required to complete training for very large intent collections.
|NLU Version|Example sentence limit per Flow (guidance)|
|---|---|
|Cutting Edge|~ 5.000 - 10.000*|
* Repeated training with caching may be required for large intent collections.