[Intents] Handling question marks #629
Hey @lucasvercelot, in your example I guess the confusion comes from the probability gap between sentences that are very close. The NLU engine runs two intent parsers, a deterministic one first and a probabilistic one second; you can disable the first parser by removing it from the engine configuration. To get a better understanding of the whole pipeline, you can have a look at the blog post we published when we open-sourced the library. I hope this helps :)
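The probability gap described above can be illustrated with a toy sketch. This is not the actual snips-nlu code, and all names below are hypothetical: it only shows the two-stage idea, where an exact-match (deterministic) parser answers first with probability 1.0, and a probabilistic classifier is the fallback with lower, smoother scores.

```python
def deterministic_parse(text, training_utterances):
    """Return (intent, 1.0) on an exact match with a training utterance, else None."""
    for intent, utterances in training_utterances.items():
        if text in utterances:
            return intent, 1.0
    return None

def probabilistic_parse(text, training_utterances):
    """Crude word-overlap (Jaccard) score, standing in for a trained classifier."""
    words = set(text.split())
    best = (None, 0.0)
    for intent, utterances in training_utterances.items():
        vocab = {w for u in utterances for w in u.split()}
        score = len(words & vocab) / max(len(words | vocab), 1)
        if score > best[1]:
            best = (intent, score)
    return best

def parse(text, training):
    # First parser wins when it matches; otherwise fall back to the second one.
    return deterministic_parse(text, training) or probabilistic_parse(text, training)

training = {"isReal": ["es-tu réel ?"]}
print(parse("es-tu réel ?", training))  # exact match -> probability 1.0
print(parse("es-tu réel", training))    # fallback -> lower score
```

With both stages enabled, two very close inputs can land on different parsers and therefore get very different scores, which is the gap observed here.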
Hi @adrienball, I had already read this article (which is great, btw) a few months ago, and I completely forgot that the NLUEngine has two parsers, my bad! :) I'll try to disable the first parser and see what I get! Thanks a lot, I'll come back to you to let you know.
Ok, so I tried without the first parser.
I was expecting a higher score. I don't know if it's best to create entities when I want to train some intents with just words or small groups of words (2-4 word sentences). I noticed that even for bigger sentences the probability score is pretty low, lower than when both parsers are enabled.
The probability returned by the probabilistic parser is indeed expected to be lower than the exact-match score. You shouldn't have to use entities if you don't need to, but it can indeed be faster to generate entity values from the intent vocabulary (if you know it exhaustively), instead of writing multiple sentences trying to cover all the vocabulary. So I guess that's something to try. Intent classification relies on characteristic words, so you can also try to add more sentences and have them contain some domain-specific words.
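The suggestion above, generating coverage from a known vocabulary instead of hand-writing every sentence, can be sketched as follows. The templates and vocabulary here are made up for illustration:

```python
from itertools import product

# Hypothetical example: cross a few sentence templates with an
# exhaustively known entity vocabulary to generate training utterances.
templates = ["je veux {item}", "donne-moi {item}", "{item} s'il te plaît"]
vocabulary = ["un café", "un thé", "un chocolat"]

utterances = [t.format(item=v) for t, v in product(templates, vocabulary)]
print(len(utterances))  # 3 templates x 3 values = 9 utterances
```

Each new template then multiplies the coverage by the whole vocabulary, which is the time-saving adrienball mentions.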
Alright, thank you very much for all this information and for your help @adrienball! 😄
Hi guys!
I'm running into a small problem today with the question mark '?' character.
I noticed that when I fit my model's intent with a sentence ending in a question mark, inputting that sentence with and without the question mark gives different results and a significant difference in probability score.
When trained with the question mark:
When trained without the question mark:
When trained with both sentences 'es-tu réel' and 'es-tu réel ?':
Do you have any idea why it's doing this? Is it normal?
Thanks in advance! :)
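One plausible explanation for the score difference, sketched here with a toy bag-of-words model rather than the actual snips-nlu featurization: if '?' ends up as a token, it becomes a feature, so a query missing it no longer matches the training vector exactly. Training with both variants, as tried above, shrinks the gap.

```python
from math import sqrt

def bow(text):
    """Bag-of-words vector as a token -> count dict (whitespace tokenization)."""
    vec = {}
    for tok in text.split():
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a.get(t, 0) * b.get(t, 0) for t in a)
    return dot / (sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values())))

trained = bow("es-tu réel ?")
print(round(cosine(trained, bow("es-tu réel ?")), 3))  # 1.0 : exact token overlap
print(round(cosine(trained, bow("es-tu réel")), 3))    # 0.816 : the '?' token is missing
```

This is only an illustration of the mechanism; the real feature extraction in the library is more elaborate, but the same mismatch principle applies.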