Sonos Announces Acquisition of Snips: Is this the end of Snips for makers?

@JGKK I’d love to see an example of your setup through a blog post or gist of how to set it all up. Do you have anything?

exactly my thoughts… very low…
Really, Snips has always been a beta product with poor support from the developers. Releases were always buggy, and support was the community helping the community.

I use Picovoice for the wake word and Rasa for NLU… I guess Google ASR for now, till something better comes along.

For my part, I’ve been using Snips for a year for a personal project on a Raspberry Pi, and I’ve rarely run into bugs; the releases seem pretty stable to me.
I will continue to use Snips with interest if a community version without significant limitations is released.

Was it ever open source? If so, where can I find its source?

@ozie I’m new to Picovoice and Rasa NLU. Any good blog posts or tutorials you’ve seen out there to get up and running?

Also, another question: what are people using to run actions? I tried Home Assistant, but it seems super complex, and I’ve been coding and working on the Linux command line for 10+ years now. The basic setup wasn’t complex; the complex part, which I’ve been trying to figure out forever, was creating an integration to use my Tuya light bulbs locally instead of through Tuya’s cloud service.

With Snips, I had figured out how to listen for the intent using hermes-javascript, and then I wrote code to perform actions depending on the intent and the slots it detected. I’d love to be able to do something similar. It sounds like maybe Rhasspy can use that same code?
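For anyone wondering what that pattern looks like outside of hermes-javascript: both Snips and Rhasspy publish recognized intents over MQTT on the Hermes topics, so roughly the same handler can serve either. Here is a minimal sketch with paho-mqtt, assuming a broker on localhost and a hypothetical SetLightColor intent:

```python
# Minimal sketch: listen for Hermes-protocol intents over MQTT.
# Assumes a broker on localhost:1883; the intent name is hypothetical.
import json

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Snips and Rhasspy both publish recognized intents on hermes/intent/<intentName>
    client.subscribe("hermes/intent/#")

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    intent = payload["intent"]["intentName"]
    slots = {s["slotName"]: s["value"]["value"] for s in payload.get("slots", [])}
    print(f"Got intent {intent} with slots {slots}")
    # ...perform your action here, e.g. switch a light based on the slots...

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)
client.loop_forever()
```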

Just wanting to explore my options. Thanks!

I’m really intrigued by Almond, but it’s not super obvious how to get it up and running and train it. I’m particularly interested in the part where it can detect multiple intents at once and perform actions like “Monitor the New York Times and Slack me when there’s an update”.

FYI: https://speechbrain.github.io/ … this is a much better scenario with some great collaborators, and the first deliverables are expected on or about June 2020.

So I’ve tried a few different alternatives and I have to say nothing works “out of the box” like snips does. I’m really disappointed Snips is taking this path. At this point I’m going to suffer through another open, offline platform like Rhasspy and see what I can cobble together.

Rhasspy and Home Assistant here.
I created automations responding to the events Rhasspy generates (a minimal test sketch is below); they work well for me, and it’s good out of the box.
I see a lot of people trying Snips, and the first parts work well. But then they get stuck with skills and such.
The same is probably true for every assistant except the Google/Apple/Amazon stuff.
Initial setup is relatively easy, but then what?
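For reference, Rhasspy’s Home Assistant intent handling fires events named after the intent (rhasspy_<IntentName>) with the recognized slots as event data, so automations just trigger on those events. A minimal sketch for firing such an event by hand through Home Assistant’s REST API, useful for testing an automation without speaking; the URL, token, and intent name below are placeholders:

```python
# Sketch: fire the kind of event Rhasspy sends to Home Assistant, so an
# automation can be tested without a microphone. Assumes Rhasspy's Home
# Assistant intent handling, which posts events named "rhasspy_<IntentName>".
import requests

HA_URL = "http://localhost:8123"                 # placeholder Home Assistant URL
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"           # placeholder access token

resp = requests.post(
    f"{HA_URL}/api/events/rhasspy_ChangeLightState",  # hypothetical intent name
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"name": "bedroom", "state": "on"},     # slot values become event data
)
resp.raise_for_status()
```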

I think I will try to create a guide for using Rhasspy and Home Assistant, if one doesn’t already exist.


I’ve tested Rhasspy and it works pretty well.

Great job @Romkabouter and @synesthesiam!

The default ASR (using pocketsphinx in French) needs more initial configuration than Snips (obviously…) but sometimes works better (for the few test cases I implemented).

The default NLU (using OpenFST) also works quite well for simple cases.

The only thing really missing to easily transition a Snips integration to Rhasspy is the built-in slot types (datetime, number, temperature, percentage, etc.). Some of these can be recreated using a JSGF grammar, but it is really tedious. Maybe it could be achieved directly using something like python-duckling?
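For illustration, a rough sketch of what that could look like with the python-duckling package from PyPI (a JPype wrapper around the Clojure Duckling library, so it needs a JVM installed); the exact method names should be checked against its README:

```python
# Rough sketch of filling "built-in" slot types with python-duckling instead
# of hand-written JSGF grammars. Requires the python-duckling package and a JVM.
from duckling import DucklingWrapper

d = DucklingWrapper()  # loads the Duckling models, takes a few seconds

# Each call returns a list of matches with the parsed value and its text span.
print(d.parse_time("remind me tomorrow at 8 pm"))
print(d.parse_number("set the volume to twenty five"))
```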

Alternatively, using Snips NLU might alleviate this, but it requires some coding to integrate it with Rhasspy (using custom training commands and remote HTTP NLU).
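As a sketch of that integration, assuming Rhasspy is pointed at a remote HTTP NLU endpoint: a tiny Flask wrapper around a persisted Snips NLU engine. The exact request/response shape Rhasspy expects should be verified against its docs; engine.parse() already returns Snips-style JSON ({"input", "intent", "slots"}):

```python
# Sketch: expose a trained Snips NLU engine over HTTP so Rhasspy's
# "remote HTTP" NLU option can call it. Endpoint path is an assumption.
from flask import Flask, jsonify, request
from snips_nlu import SnipsNLUEngine

app = Flask(__name__)
# Directory previously produced by engine.persist() (see the training sketch below)
engine = SnipsNLUEngine.from_path("trained_engine")

@app.route("/parse", methods=["POST"])
def parse():
    text = request.get_data(as_text=True)  # raw text of the utterance
    return jsonify(engine.parse(text))

if __name__ == "__main__":
    app.run(port=5000)
```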

https://duckling.wit.ai

I tried duckling, and it is really fast, but for now it does not support multi-level durations (e.g. 2 hours and 35 minutes), and the logic to associate the extracted value with the intent slot needs to be handled manually.
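For anyone who wants to reproduce that test: the Haskell Duckling from the link above runs an HTTP server on port 8000 by default, and you can query it like this (a minimal sketch, assuming a local instance):

```python
# Query a locally running Duckling server (https://duckling.wit.ai),
# which listens on port 8000 by default.
import requests

resp = requests.post(
    "http://localhost:8000/parse",
    data={"locale": "en_US", "text": "remind me in 2 hours and 35 minutes"},
)
for entity in resp.json():
    print(entity["dim"], entity["value"])
# As noted above, the combined "2 hours and 35 minutes" duration is not
# returned as a single value, and mapping results onto intent slots is manual.
```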

I built and tested snips-nlu-rs, and although it depends on a lot of Snips GitHub repositories, it works with an offline-trained NLU engine (through the Snips NLU Python library).

Integrating the Snips NLU engine into Rhasspy, for example, should allow lots of Snips makers to migrate without changing their intent handling system.
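For context, training such an engine offline is only a few lines with the snips-nlu Python library; the persisted directory can then be loaded by snips-nlu-rs or by a wrapper like the one sketched earlier. "dataset.json" here stands for a Snips-format dataset you provide:

```python
# Sketch: train a Snips NLU engine fully offline with the snips-nlu library,
# then persist it for later loading (e.g. by snips-nlu-rs).
import io
import json

from snips_nlu import SnipsNLUEngine
from snips_nlu.default_configs import CONFIG_EN

with io.open("dataset.json", encoding="utf8") as f:
    dataset = json.load(f)  # your intents/utterances in Snips dataset format

engine = SnipsNLUEngine(config=CONFIG_EN)
engine.fit(dataset)
engine.persist("trained_engine")  # directory loadable via from_path()

print(engine.parse("turn the bedroom lights on"))
```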

It is a short-term solution, though, as the NLU engine may soon be discontinued by Snips/Sonos.

I’ll keep searching for a better way to handle NLU using “community libraries” :slight_smile:

It is a short-term solution, though, as the NLU engine may soon be discontinued by Snips/Sonos

I don’t think we’re there yet. For now, the NLU library works pretty great and has been open source for a long time… I personally rely on it for my tiny assistant (Pytlas, an open-source Python library to build your own assistant) but can migrate to another one since it has been abstracted behind an Interpreter class.

What kind of hardware are you using?
Does it support Matrix Voice Standard?

Yes; without the ESP32 chip, you can still attach it to a Pi and use it as a microphone.

Unfortunately, it’s the end of Snips for makers:

Is snips-tts also open-sourced? Because for languages other than English, most free/non-cloud TTS solutions sound plain crappy.

At least in German, they just use Pico TTS and nothing of their own. You can download and install pico2wave pretty easily (sketch below).
Johannes
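For completeness, a minimal sketch of calling pico2wave (package libttspico-utils on Debian/Raspbian) from Python and playing the result with aplay:

```python
# Minimal sketch: offline German TTS with pico2wave, played back via aplay.
import subprocess
import tempfile

def say(text, lang="de-DE"):
    with tempfile.NamedTemporaryFile(suffix=".wav") as wav:
        # Render the text to a WAV file, then play it on the default ALSA device.
        subprocess.run(["pico2wave", "-l", lang, "-w", wav.name, text], check=True)
        subprocess.run(["aplay", wav.name], check=True)

say("Hallo, ich bin dein Sprachassistent.")
```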


Ah, didn’t know that. Might be worth checking out.

When looking at alternatives like Rhasspy, how would one realize the concept of Snips satellites? I like the idea of having one base and multiple satellites across the rooms.

Does Rhasspy support this in some way, manually or out of the box?

Disclaimer: I don’t actually use Rhasspy, but I’ve read the docs and had a little play.
I think they also support audio input over MQTT using https://pypi.org/project/hermes-audio-server/, which is a project based on one of the few components Snips actually made open source. It should be possible to do something with this.