If you want to build an assistant today in a language not yet available with Snips, you can rely on a cloud service such as Google's Cloud Speech. Google's service has been integrated into our platform, so you can use any of the languages it provides.
You will simply need to select it when configuring your assistant on the web console, and include your Google Cloud Speech credentials on your device.
Note that if you use Google's API, you will need to pay for it beyond Google's free usage quota, your assistant will not work offline, and it will not follow the principles of privacy by design. We are working hard to extend our support for on-device ASR to other languages as soon as possible 😉.
In order to provide assistants that are light and robust, the Snips console trains a custom Language Model for each assistant's ASR. This model is limited to the vocabulary appearing in the assistant, and some built-in entities (numbers, dates, etc).
This gets in the way of applications that require understanding a large vocabulary. General knowledge questions, for example: "What's the distance between the earth and the moon?", "Who invented the television?", etc. Robust, embedded large-vocabulary ASR is beyond the current state of the art, but if you are willing to trade robustness for generality, Snips provides you with an experimental large-vocabulary ASR.
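The effect of a restricted language model can be sketched with a toy example: a recognizer constrained to an assistant's vocabulary cannot emit words outside it, so out-of-vocabulary words are dropped or misrecognized. The minimal Python illustration below is hypothetical (the vocabulary and the `<unk>` handling are not Snips internals, just a sketch of the restriction):

```python
# Toy illustration of a vocabulary-constrained recognizer.
# A real ASR scores acoustics against a language model; here we only
# show the vocabulary restriction: out-of-vocabulary words become <unk>.

ASSISTANT_VOCAB = {
    "turn", "on", "off", "the", "lights", "set", "temperature",
    "to", "degrees", "twenty", "kitchen", "living", "room",
}

def constrained_decode(words):
    """Map any word outside the assistant's vocabulary to <unk>."""
    return [w if w in ASSISTANT_VOCAB else "<unk>" for w in words]

# An in-domain command survives intact:
print(constrained_decode("turn on the kitchen lights".split()))

# A general-knowledge question mostly falls outside the model:
print(constrained_decode("who invented the television".split()))
```

This is why a query like "Who invented the television?" cannot be transcribed by an assistant-specific model, whatever the acoustics: the words simply are not in its language model.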
First, install the package. Be aware that it is about 160MB to download, needs about 700MB of disk space during setup, and takes about 500MB once installed.
sudo apt-get update; sudo apt-get install snips-asr-model-en-500mb
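Since setup needs roughly 700MB of free disk space, it can be worth checking available space before installing, especially on a small SD card. A minimal sketch using Python's standard library (the 700MB threshold is the setup figure quoted above; the mount point to check is an assumption, adjust it to where apt installs packages on your device):

```python
import shutil

# Setup-time disk requirement quoted in the install notes above.
REQUIRED_MB = 700

def has_enough_space(path="/", required_mb=REQUIRED_MB):
    """Return True if the filesystem holding `path` has enough free space."""
    free_mb = shutil.disk_usage(path).free // (1024 * 1024)
    return free_mb >= required_mb

if not has_enough_space():
    print("Not enough free space to set up the 500MB ASR model.")
```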
Next, you need to override the assistant's model with the generic model: in
/etc/snips.toml, go to the
snips-asr section and add:
model = "/usr/share/snips/snips-asr-model-en-500MB"
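Put together, the relevant part of /etc/snips.toml would look like this (any other keys already present in the section stay as they are):

```toml
[snips-asr]
model = "/usr/share/snips/snips-asr-model-en-500MB"
```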
Finally, restart the ASR daemon:
sudo systemctl restart snips-asr