The Snips platform supports conversations with back-and-forth communication between the Dialogue Manager and the client code. The whole conversation is wrapped in a single session. The client decides whether the session should continue or end based on the intent(s) it receives from the platform. Figure 3 shows an example of a conversation about booking a flight:
The user wakes the platform with the hotword and starts expressing their intention to book a flight.
The client code receives the intent and needs to disambiguate the departure airport of the flight, so it asks the Dialogue Manager to continue the session, providing the question to ask the user along with the kind of answer it expects (the name of the intent(s) to use to analyze the user's answer). In our case, it is an intent that knows about airport names.
Without having to wake the platform again with the hotword, the user gives their answer, and the client code can verify that the airport name is one of the expected ones (Roissy or Orly). It can then follow up and ask for more information such as time, date, etc.
Once the client code has all the necessary information, it can end the session with the endSession message, along with an optional feedback text message.
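As a concrete sketch of the two messages, here is one way the client code could build their JSON payloads. This assumes the standard Hermes MQTT topics `hermes/dialogueManager/continueSession` and `hermes/dialogueManager/endSession`; the sample session id and the `searchFlight:airport` intent filter are made-up illustrations, so check the names against your platform version:

```python
import json

# Dialogue Manager topics (assumed Hermes MQTT API)
CONTINUE_TOPIC = "hermes/dialogueManager/continueSession"
END_TOPIC = "hermes/dialogueManager/endSession"

def continue_session(session_id, text, intent_filter):
    """Build the message asking the Dialogue Manager to keep the session open."""
    return CONTINUE_TOPIC, json.dumps({
        "sessionId": session_id,
        "text": text,                   # the question the assistant will ask
        "intentFilter": intent_filter,  # intents allowed for the user's answer
    })

def end_session(session_id, text=None):
    """Build the message ending the session, with optional feedback text."""
    payload = {"sessionId": session_id}
    if text is not None:
        payload["text"] = text
    return END_TOPIC, json.dumps(payload)

# Hypothetical example: ask for the departure airport, restricting the
# answer to an intent that knows about airport names.
topic, payload = continue_session(
    "session-42",
    "Are you leaving from Roissy or Orly?",
    ["searchFlight:airport"],
)
```

The returned topic and payload can then be published with any MQTT client, for example paho-mqtt.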
The purpose of this app is to help you practice your times tables. Below we show a typical dialogue session with your assistant, as well as the API calls the developer has to make to create the dialogue. Example of a dialogue with the voice assistant:
User: "Hey Snips"
User: "Start the lesson with 4 questions on table 9"
Assistant: "What's 5 x 9?"
User: "Hmm, the answer is 45, isn't it?"
Assistant: "That's it, well done! What's 4 x 9?"
User: "I think it's 33"
Assistant: "Oh no, wrong answer, 4 x 9 = 36. What's 3 x 9?"
User: "I don't know"
Assistant: "That's okay, don't worry, the answer was 27. What's 6 x 9?"
User: "I don't want to play anymore"
Assistant: "Ok, the game is over."

To create this interaction, you need to create an assistant in the console and an action code that will manage the dialogue interactions.
You can define an assistant in the Snips console by creating the different intents yourself, or you can use the official Snips app made for this purpose. You will need 4 intents:
startQuiz, with the following slots
giveAnswer, with the following slot
giveUp, with no slots.
stopGame, with no slots.
Once you've finished your voice assistant, you have to implement the assistant's answers, along with the dialogue interactions.
Below, you can see the same dialogue along with the query that your action code has to send to create the dialogue:
User: "Start the lesson with 4 questions on table 9."
Assistant: "What's 5 x 9?"
User: "Hmm, the answer is 45, isn't it?"
Assistant: "That's it, well done! What's 4 x 9?"
User: "I think it's 33."
Assistant: "Oh no, wrong answer, 4 x 9 is equal to 36... What's 3 x 9?"
User: "I don't know."
Assistant: "That's okay, don't worry, the answer was 27. What's 6 x 9?"
User: "I don't want to play anymore."
Assistant: "Ok, the game is over."
Basically, every time the Snips assistant detects one of the 4 intents, it is sent to your action code. Your action code can then react accordingly, for instance by asking you questions about the times tables. In the dialogue above, you can see the detailed MQTT query that your code has to send (every time it detects an intent) to enable the expected interactions. Now let's see how to subscribe and publish from your Python code using hermes-python:
```python
# -*- coding: utf-8 -*-
# import hermes-python
from hermes_python.hermes import Hermes

# address of the MQTT broker the platform runs on (host:port)
MQTT_ADDR = "localhost:1883"

# store the names of your intents as global variables
INTENT_START_QUIZ = "start_lesson"
INTENT_ANSWER = "give_answer"
INTENT_INTERRUPT = "interrupt"
INTENT_DOES_NOT_KNOW = "does_not_know"

# user_request_quiz will be called every time the intent
# INTENT_START_QUIZ is detected
def user_request_quiz(hermes, intent_message):
    print("User is asking for a quiz")
    sentence = "What is 5 times 9 ?"
    # continue the session: ask the question, and filter the next
    # recognition to the intents that make sense as an answer
    hermes.publish_continue_session(
        intent_message.session_id,
        sentence,
        [INTENT_ANSWER, INTENT_DOES_NOT_KNOW, INTENT_INTERRUPT],
    )

# register the intent and start listening
with Hermes(MQTT_ADDR) as h:
    h.subscribe_intent(INTENT_START_QUIZ, user_request_quiz).start()
```
This code shows you the basics: subscribing to intents and responding with a sentence that the assistant will pronounce, together with an intent filter to focus the voice recognition on a subset of the intents. Obviously, you will need to make the interaction dynamic, randomise the questions asked, and take the slot values of the intents into account. Still, you should have all the pieces to get started on a dialogue interaction. You can also have a look at the code on our Github: here.
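As one way to make the interaction dynamic, you could keep a small quiz state per session and draw random questions from it. This is a minimal sketch: the `Quiz` class and its method names are hypothetical helpers, not part of hermes-python, and in a real handler the table and question count would come from the startQuiz slot values:

```python
import random

class Quiz:
    """Tracks one quiz session: the chosen table and remaining questions."""

    def __init__(self, table, n_questions, rng=None):
        self.table = table
        self.remaining = n_questions
        self.rng = rng or random.Random()
        self.current = None  # the factor currently being asked

    def next_question(self):
        """Pick a random factor and return the question text, or None when done."""
        if self.remaining == 0:
            return None
        self.remaining -= 1
        self.current = self.rng.randint(2, 9)
        return "What's {} x {}?".format(self.current, self.table)

    def check(self, answer):
        """Compare the user's answer (parsed from a slot value) with the product."""
        return answer == self.current * self.table
```

In the intent handlers you would create a `Quiz` when INTENT_START_QUIZ fires (reading the table and question count from `intent_message.slots`, whose exact shape depends on your slot names), ask `next_question()` in each `publish_continue_session` call, and grade the answer slot with `check()`.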
Alternatively, you can add our prebuilt times-tables-quiz app to your assistant from console.snips.ai.