Right now the console gives us a way to test a single sentence. Nice.
But with several apps, lots of intents, and tons of possible sentences, modifying one intent or adding another app can break things, whether everywhere or just for a few sentences. For example, I once had a few sentences that triggered the wrong intent; I modified them and that fixed it, but then other sentences started matching the wrong intents.
So now I keep a set of sentences that I always run through before updating the assistant on the Pi, to be sure I didn't break anything else.
And here we are: manually testing around 30 sentences at the end to make sure everything still works is... a pain!
So, could we have a page dedicated to batch testing, where we could import a set of sentences, store it for later use, and run all the sentences in one batch?
Even better would be a table like:
this is my sentence ------ the intent it should trigger ------ the intent actually triggered after the test (green/red)
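In the meantime, the table above can be approximated with a small script. This is only a sketch: `parse_sentence` is a hypothetical stand-in for however your assistant exposes its NLU (an HTTP endpoint, a CLI call, etc.), and the sentence/intent pairs are made-up examples, not real console data.

```python
import csv
from io import StringIO

# Example test set in "sentence,expected_intent" form.
# In practice this would be loaded from a file kept under version control.
TEST_SET = """\
turn on the kitchen light,lightsOn
what's the weather tomorrow,getWeather
"""

def parse_sentence(sentence):
    """Placeholder for the real NLU query (HTTP request, CLI call, ...).
    Here it returns canned answers so the sketch runs on its own;
    one answer is deliberately wrong to show a red row."""
    canned = {
        "turn on the kitchen light": "lightsOn",
        "what's the weather tomorrow": "getForecast",  # wrong on purpose
    }
    return canned.get(sentence, "unknown")

def run_batch(test_csv):
    """Return (sentence, expected, actual, passed) for every test row."""
    results = []
    for sentence, expected in csv.reader(StringIO(test_csv)):
        actual = parse_sentence(sentence)
        results.append((sentence, expected, actual, expected == actual))
    return results

if __name__ == "__main__":
    for sentence, expected, actual, ok in run_batch(TEST_SET):
        mark = "GREEN" if ok else "RED"
        print(f"{mark:5} | {sentence} | expected {expected} | got {actual}")
```

Swapping the canned dictionary for a real query against the assistant would give exactly the green/red report described above, runnable before each update to the Pi.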
Honestly, this would save a lot of time and provide an efficient way to test the whole assistant globally.
PS: you'll notice I didn't ask to also specify which slots should be filled for each intent... I'm keeping that for v2.