Deploy your apps

We use templates to transform snippets into code. You can find more technical information on GitHub: https://github.com/snipsco/snips-actions-templates

Deploy with Sam

If you have already deployed your assistant with Sam through sam install assistant, your apps are already deployed.

Alternatively, you can reinstall them with sam install actions

sam install actions only reinstalls the snippets already present in the assistant folder /usr/share/snips/assistant/snippets

If you wish to update the apps after editing them on the console, run

sam update-assistant

It will download the assistant and reinstall apps automatically.

To check the logs, run sam service log snips-skill-server

If you have Home Assistant snippets, Sam will search for an existing installation and move the snippets to the Hass python_scripts folder in /home/homeassistant/.homeassistant/python_scripts

How it all works

How does Snips go from code snippets, Git repositories, or Hass components to actual code running on your Pi? Here we explain, step by step, how it works behind the scenes.

1. Download the assistant

The first step is to download the assistant and copy it to /usr/share/snips/assistant. Here is what is inside the /usr/share/snips folder:

/usr/share/snips/
|-- assistant
| |-- assistant.json
| |-- custom_asr ...
| |-- custom_dialogue ...
| |-- custom_hotword ...
| |-- dataset.json
| |-- snippets
| | `-- nebuto.Smart_Lights
| | |-- config.ini
| | `-- python2
| | |-- lightsSet.snippet
| | `-- lightsTurnOff.snippet
| |-- Snipsfile.yaml
| `-- trained_assistant.json
|-- dialogue ...
|-- hotword ...
|-- snips-actions-templates
| |-- homeassistant
| | |-- action_{{action_name_alt}}.py.tpl
| | |-- README.md
| | `-- spec.json
| `-- python2
| |-- action-{{action_name}}.py.tpl
| |-- README.md
| |-- requirements.txt
| |-- setup.sh
| `-- spec.json

Several things to notice here: the snippets are present inside your assistant, the Snipsfile.yaml contains the Git repository URLs, and the snips-actions-templates folder contains the templates that will be used to generate the actual code from the snippets. The templates can be found on Snips' GitHub.

2a. Snippets code generation

On the Pi, we run:

snips-template render

It will search for existing snippets inside /usr/share/snips/assistant/snippets and use the templates in /usr/share/snips/snips-actions-templates to generate the code, which is placed in /var/lib/snips/skills. The generated folder will contain this:

/var/lib/snips/skills/nebuto.Smart_Lights
|-- action-lightsSet-nebuto.Smart_Lights.py
|-- action-lightsTurnOff-nebuto.Smart_Lights.py
|-- config.ini
|-- README.md
|-- requirements.txt
|-- setup.sh
|-- spec.json

Some things to note:

In the code snippets tutorial, we created a simple lightsSet and lightsTurnOff snippet:

lightsSet:

if len(intentMessage.slots.house_room) > 0:
    house_room = intentMessage.slots.house_room.first().value  # We extract the value from the slot "house_room"
    result_sentence = "Turning on lights in : {}".format(str(house_room))  # The response that will be said out loud by the TTS engine.
else:
    result_sentence = "Turning on lights"

current_session_id = intentMessage.session_id
hermes.publish_end_session(current_session_id, result_sentence)

lightsTurnOff:

if len(intentMessage.slots.house_room) > 0:
    house_room = intentMessage.slots.house_room.first().value  # We extract the value from the slot "house_room"
    result_sentence = "Turning off lights in : {}".format(str(house_room))  # The response that will be said out loud by the TTS engine.
else:
    result_sentence = "Turning off lights"

current_session_id = intentMessage.session_id
hermes.publish_end_session(current_session_id, result_sentence)

Once generated, they will look like this:

lightsSet:

#!/usr/bin/env python2
# -*- coding: utf-8 -*-

import ConfigParser
from hermes_python.hermes import Hermes
from hermes_python.ontology import *
import io

CONFIGURATION_ENCODING_FORMAT = "utf-8"
CONFIG_INI = "config.ini"

class SnipsConfigParser(ConfigParser.SafeConfigParser):
    def to_dict(self):
        return {section : {option_name : option for option_name, option in self.items(section)} for section in self.sections()}

def read_configuration_file(configuration_file):
    try:
        with io.open(configuration_file, encoding=CONFIGURATION_ENCODING_FORMAT) as f:
            conf_parser = SnipsConfigParser()
            conf_parser.readfp(f)
            return conf_parser.to_dict()
    except (IOError, ConfigParser.Error) as e:
        return dict()

def subscribe_intent_callback(hermes, intentMessage):
    conf = read_configuration_file(CONFIG_INI)
    action_wrapper(hermes, intentMessage, conf)

def action_wrapper(hermes, intentMessage, conf):
    if len(intentMessage.slots.house_room) > 0:
        house_room = intentMessage.slots.house_room.first().value  # We extract the value from the slot "house_room"
        result_sentence = "Turning on lights in : {}".format(str(house_room))  # The response that will be said out loud by the TTS engine.
    else:
        result_sentence = "Turning on lights"
    current_session_id = intentMessage.session_id
    hermes.publish_end_session(current_session_id, result_sentence)

if __name__ == "__main__":
    with Hermes("localhost:1883") as h:
        h.subscribe_intent("lightsSet", subscribe_intent_callback) \
            .start()

lightsTurnOff:

#!/usr/bin/env python2
# -*- coding: utf-8 -*-

import ConfigParser
from hermes_python.hermes import Hermes
from hermes_python.ontology import *
import io

CONFIGURATION_ENCODING_FORMAT = "utf-8"
CONFIG_INI = "config.ini"

class SnipsConfigParser(ConfigParser.SafeConfigParser):
    def to_dict(self):
        return {section : {option_name : option for option_name, option in self.items(section)} for section in self.sections()}

def read_configuration_file(configuration_file):
    try:
        with io.open(configuration_file, encoding=CONFIGURATION_ENCODING_FORMAT) as f:
            conf_parser = SnipsConfigParser()
            conf_parser.readfp(f)
            return conf_parser.to_dict()
    except (IOError, ConfigParser.Error) as e:
        return dict()

def subscribe_intent_callback(hermes, intentMessage):
    conf = read_configuration_file(CONFIG_INI)
    action_wrapper(hermes, intentMessage, conf)

def action_wrapper(hermes, intentMessage, conf):
    if len(intentMessage.slots.house_room) > 0:
        house_room = intentMessage.slots.house_room.first().value  # We extract the value from the slot "house_room"
        result_sentence = "Turning off lights in : {}".format(str(house_room))  # The response that will be said out loud by the TTS engine.
    else:
        result_sentence = "Turning off lights"
    current_session_id = intentMessage.session_id
    hermes.publish_end_session(current_session_id, result_sentence)

if __name__ == "__main__":
    with Hermes("localhost:1883") as h:
        h.subscribe_intent("lightsTurnOff", subscribe_intent_callback) \
            .start()

Those files can be a good starting point for your own action code.
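
For example, you would typically replace the body of action_wrapper with whatever actually controls your hardware or service. The sketch below keeps the intent handling from the generated file and adds a call to a hypothetical turn_on_lights() helper:

def action_wrapper(hermes, intentMessage, conf):
    # Extract the room from the slot if it was provided (same logic as the generated file).
    if len(intentMessage.slots.house_room) > 0:
        house_room = intentMessage.slots.house_room.first().value
        turn_on_lights(house_room)  # hypothetical helper that talks to your lights
        result_sentence = "Turning on lights in : {}".format(str(house_room))
    else:
        turn_on_lights(None)  # hypothetical helper: no room specified, turn on all lights
        result_sentence = "Turning on lights"
    # End the dialogue session; the TTS engine reads result_sentence out loud.
    hermes.publish_end_session(intentMessage.session_id, result_sentence)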

2b. Home Assistant

For Home Assistant, we copy the code snippets directly into the default Home Assistant folder /home/homeassistant/.homeassistant/python_scripts

The rest is explained in more depth in the manual deployment section below.

2c. Git repository

For a GitHub repository, we simply read the Snipsfile.yaml and check the repository out into /var/lib/snips/skills

The repository is expected to follow the Action specifications guidelines in order to run out of the box.
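
As a rough sketch (the exact requirements are in the Action specifications guidelines), such a repository typically mirrors the generated folder shown above; the names below are placeholders:

your-action/
|-- action-yourIntent.py
|-- config.ini
|-- requirements.txt
`-- setup.sh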

3. Generating virtual environment

Each action can have a setup.sh file that creates a virtualenv in which all of its dependencies are installed. This prevents dependency conflicts between different actions. We run setup.sh if it is present.
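
As a rough illustration, a minimal setup.sh could look like the sketch below (the actual template-generated script may differ): it creates a virtualenv next to the action code and installs the requirements into it.

#!/usr/bin/env bash

# Create a local virtualenv for this action if it does not exist yet (sketch).
VENV=venv
if [ ! -d "$VENV" ]
then
    virtualenv --python=python2 "$VENV"
fi

# Install the action's dependencies into the virtualenv.
. "$VENV/bin/activate"
pip install -r requirements.txt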

4. Relaunch the platform

We relaunch the whole platform so that the new assistant and the code snippets are taken into account.

sudo systemctl restart 'snips-*'

5. Finished

If everything worked fine, you can now see the action logs in the snips-skill-server logs:

journalctl -f -u snips-skill-server

Deploy manually (without Sam)

Prerequisites

  • We need two more programs from the Snips platform: snips-template, which turns the code snippets associated with apps into fully functional pieces of code, and snips-skill-server, which runs those pieces of code.

$ sudo apt-get update
$ sudo apt-get install -y snips-template snips-skill-server
  • The code snippets are written in Python, so we need to install a fully-fledged Python environment

$ sudo apt-get install -y python-pip
$ sudo pip install virtualenv

Add the current user to the group snips-skills-admin (make sure you are not root, so that $USER is your normal user)

$ sudo usermod -a -G snips-skills-admin $USER

Now, this is an important step. To make sure that your user is correctly added to the snips-skills-admin group, you have to LOG OUT OF YOUR RASPBERRY PI: group membership only takes effect on a new login session.

To prevent unwanted effects from apps on your system, the _snips-skills user running the apps has limited privileges. If you want to grant more privileges, you need to add the _snips-skills user to the groups you want access to. For instance, if you want to use the GPIOs on your Raspberry Pi, run

sudo usermod -a -G spi,gpio _snips-skills
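
Once _snips-skills belongs to the right groups, an action can drive the GPIOs directly. The sketch below is only an illustration (the pin number and wiring are assumptions) using the standard RPi.GPIO library inside the action_wrapper of a generated action:

import RPi.GPIO as GPIO

LIGHT_PIN = 18  # assumed BCM pin driving a relay for the lights

def action_wrapper(hermes, intentMessage, conf):
    # Drive the assumed pin high to switch the lights on.
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LIGHT_PIN, GPIO.OUT)
    GPIO.output(LIGHT_PIN, GPIO.HIGH)
    hermes.publish_end_session(intentMessage.session_id, "Turning on lights")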

Code snippets & Github repository

  • Log back in to your Raspberry Pi, and generate the apps from the code snippets with the following command:

$ snips-template render

This step is only required if your assistant contains code snippets to render into Python scripts. If your code is hosted on a Git repository, clone your skill into /var/lib/snips/skills and proceed to the skill setup.
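
For example (the repository URL below is only a placeholder):

$ cd /var/lib/snips/skills
$ git clone https://github.com/your-account/your-action.git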

  • Generate a Python virtual environment for each app

$ cd /var/lib/snips/skills

cd into each app's folder and run:

$ ./setup.sh

Depending on the app, you might want to add parameters in the config.ini file.
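
For instance, a hypothetical config.ini could look like the sketch below; the generated code shown earlier reads it into the conf dictionary passed to action_wrapper, so this value would be available as conf['global']['default_room']:

[global]
default_room=living room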

  • Restart snips-skill-server to launch the apps.

$ sudo systemctl restart snips-skill-server
  • Now test your Voice Assistant. In a new tab, connect to your device, and run:

$ snips-watch -vvv

Plug a speaker into your Pi. You’ll need it to hear the Text-To-Speech response.

The default wakeword is “hey Snips!”. The wakeword is what triggers the device to start listening for a command.

Say “hey Snips!” and then “Switch on the lights in the garage”. You should hear “Turning on lights in : garage” on your speaker.

If you notice some lag when you speak, the power supply of the Raspberry Pi might be too weak, which impacts performance. Make sure you have a sufficient power supply.

Home Assistant snippets

The snippets are meant to be run by Hass as Python scripts, not by Snips' snips-skill-server. Once your assistant is downloaded, the snippets should be generated and stored in /var/lib/snips/skills

If that's not the case, you can find the Hass snippets in /usr/share/snips/assistant/snippets/homeassistant. You can either manually change each snippet extension to .py, or use snips-template render

Once that's done, here are the next steps you need to follow for these Python scripts to work with Hass:

  • move the Python files into the Hass python_scripts folder. Script names must be lowercase and can't be in subfolders

  • add the snips: component to the Hass configuration, see https://www.home-assistant.io/components/snips/

  • add the python_script: component

  • add the mqtt: component with the broker & port that Snips uses

You'll then have to edit Hass' configuration.yaml to point to the scripts that were added. For instance, if you use Snips' Smart lights bundle, this is what your configuration.yaml will look like:

mqtt:
  broker: 127.0.0.1
  port: 1883

python_script:

snips:

intent_script:
  lightsTurnOnSet:
    action:
      - service: python_script.action_lightsTurnOnSet_Smart_lights
        data_template:
          house_room: "{{ house_room }}"
          number: "{{number}}"
          unit: "{{unit}}"
  lightsTurnOff:
    action:
      - service: python_script.action_lightsTurnOff_Smart_lights
        data_template:
          house_room: "{{house_room}}"

The data_template section exposes the Snips intent's slot values to the Python script. You can then retrieve the slot values inside the Python script: house_room = data.get('house_room')
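
As an illustration, a hypothetical python_script could use that slot value as follows (data, hass and logger are provided by Home Assistant inside python scripts; the entity_id is only an example):

# Read the slot exposed through data_template.
house_room = data.get('house_room')
if house_room:
    logger.info("Turning on the lights in %s", house_room)
    # Call a Home Assistant service; the entity_id below is just an example.
    hass.services.call('light', 'turn_on', {'entity_id': 'light.garage'}, False)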

Once everything is ready, you can relaunch Hass.

To check if you configured Hass correctly, you can display Hass' logs: journalctl -f -u home-assistant@homeassistant.service

Home Assistant will tell you if the configuration.yaml file contains any errors (you can also check this in the Hass web interface). It will also tell you if there is something wrong in your Python scripts.

Once Hass has correctly recognized your scripts, you can fully test your Voice Assistant.

In a new tab, connect to your device, and run:

$ snips-watch -vvv

The default wakeword is “hey Snips!”. The wakeword is what triggers the device to start listening for a command.

Say “hey Snips!” and then “Switch on the lights in the garage”. You should hear “Turning on lights in : garage” on your speaker.

If you notice some lag when you speak, the power supply of the Raspberry Pi might be too weak, which impacts performance. Make sure you have a sufficient power supply.