I'd frequently find myself probing points with both hands, twisting awkwardly to press the Single or Stop button, and wishing I could just yell at the scope to pause the capture. Keysight does offer voice control for its oscilloscopes, starting with the 6000 series. For those of us on a more modest budget, I found a way to build DIY voice control for less than $50. It works on the DSOX1102G, and probably on the new 4-channel version too (crossing my fingers for Wave next month).
Update (Mar 25): I've now extended it to work with Rigol scopes, as well as the full family of Keysight 1000-6000 scopes.
To build the solution, I started with a Raspberry Pi model 3A+, which has just enough ports to connect to the scope. The model 3A+ is smaller and $10 cheaper than the model 3B, forgoing several USB ports and wired Ethernet. For a microphone, I used the ReSpeaker 2-Mics Pi HAT, which fits nicely in the Adafruit case. For the voice platform, I used Snips.ai. I learned about Snips at a Maker Faire a couple of years ago, and I really like their platform: it's partially open-source and free for individual use, it can run on the Pi, and it runs everything completely on-device — there's no network connection or cloud BS involved! Total cost: $25 (RPi 3A+) + $7 (Adafruit case) + $11 (microphone hat) = $43.
Here's a 1-min video of me probing the FET turn-off in a power supply I built for Nixie tubes. The current sense node is low-level and next to an inductor, so there's no way to obtain good signal integrity without carefully holding the probe and using a ground spring. With my hands full, the voice control really helps when I need to stop the scope or change the view. The Pi is in the lower right corner, and the LEDs blink when the speech processing is active. "Hey Snips" is the wake word to activate speech processing. There is a pause required after speaking the wake word, and between commands, so it's not as snappy as I'd like, but hopefully the Snips developers will improve this over time.
The Snips platform does speech recognition and natural language processing to turn spoken commands into "intents" such as "show channel". I wrote a program that listens for these intent messages and translates them into VISA commands sent to the scope over USB.
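The core of that bridge is a small translation step: receive an intent message, look up the matching SCPI command, and write it to the scope. Here's a minimal sketch of that translation logic. The intent names and the `INTENT_TO_SCPI` table are illustrative (the actual names in the ollie repository may differ); the SCPI commands are the standard Keysight InfiniiVision `:RUN`/`:STOP`/`:SINGle` commands.

```python
import json

# Hypothetical mapping from Snips intent names to Keysight SCPI commands.
# In the real service, a paho-mqtt client subscribes to the Snips hermes
# topics (hermes/intent/#) and a pyvisa session writes the resulting
# command string to the scope over USB.
INTENT_TO_SCPI = {
    "runSingle": ":SINGle",
    "stop": ":STOP",
    "run": ":RUN",
}

def scpi_for_intent(payload):
    """Translate a Snips intent message payload into a SCPI command."""
    message = json.loads(payload)
    # Snips intent names are namespaced, e.g. "user:runSingle"
    intent = message["intent"]["intentName"].split(":")[-1]
    return INTENT_TO_SCPI.get(intent)

# A payload shaped like what Snips publishes when it recognizes a command
example = b'{"intent": {"intentName": "user:runSingle"}, "slots": []}'
print(scpi_for_intent(example))  # -> :SINGle
```

Keeping the translation as a pure function like this makes it easy to test without a broker or a scope attached.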
While I originally wanted voice control just to hit the Single button when my hands were occupied, I found other places where it helps me work faster. For example, changing the trigger source channel normally means pressing a handful of buttons on the scope, whereas now I can just tell it, "trigger on channel 2". I ended up implementing voice control for most of the "core" commands in Keysight's programming guide. The assistant can control these functions:
- Run/Stop/Single
- Show/Hide channel (1/2 + math, references, and the external channel)
- Vertical/horizontal scale adjustments
- Add and clear measurements (duty cycle, rise/fall time, pre/overshoot, +/- pulse width, frequency, period, amplitude/average/min/max/base/top/P-P voltage)
- Trigger source and slope
- Saving screen captures to a USB drive
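Commands like "trigger on channel 2" carry a parameter, which Snips delivers as a "slot" alongside the intent. Here's a sketch of how such a slot can be turned into the corresponding Keysight SCPI command. The slot name `channel` is an assumption (the repository may name it differently); the JSON shape follows the Snips hermes message format, and `:TRIGger:EDGE:SOURce CHANnel<n>` is the documented InfiniiVision command for setting the edge trigger source.

```python
import json

def trigger_source_command(payload):
    """Build the SCPI trigger-source command from a Snips intent payload."""
    message = json.loads(payload)
    # Snips delivers parsed slot values as a list alongside the intent;
    # collect them into a name -> value dict for easy lookup.
    slots = {s["slotName"]: s["value"]["value"] for s in message["slots"]}
    channel = int(slots["channel"])
    return ":TRIGger:EDGE:SOURce CHANnel%d" % channel

# Payload for a spoken "trigger on channel 2" (slot name is hypothetical)
example = b'''{"intent": {"intentName": "user:setTriggerSource"},
               "slots": [{"slotName": "channel",
                          "value": {"kind": "Number", "value": 2}}]}'''
print(trigger_source_command(example))  # -> :TRIGger:EDGE:SOURce CHANnel2
```

The other slot-bearing commands (vertical/horizontal scale, measurement type, trigger slope) follow the same pattern: extract the slot value, then format it into the matching SCPI command from the programming guide.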
The code for my assistant is on Github:
https://github.com/jmwilson/ollie
The Snips app with the above functions and intents is here:
https://console.snips.ai/store/en/skill_E3eq8QB0Ae - it can be bundled into your own assistant or you can use the trained model in the Github repository.
Edit: ready-to-use Raspbian images are now available, linked from the Github readme:
https://github.com/jmwilson/ollie#getting-started
I'd like to customize Raspbian's pi-gen system to make ready-to-use images for the Pi, but am currently blocked on generating Debian packages for the ReSpeaker drivers. For now, setup requires installing Raspbian Lite, installing the ReSpeaker drivers according to Seeed Studio's directions, installing the Snips platform with the trained assistant model, and then running the ollie service. I'm happy to walk others through the directions for making their own.