Interesting, because if the waveform display area is in memory shared by the SoC and the FPGA, and uses one of the pixmap formats supported by the RK3399's built-in Mali T860 MP4 OpenGL ES GPU, it would be trivial for the GPU to compose the waveform area at any supported display resolution (as it seems to be doing right now). They could also be using the shared memory area for the waveform data points only, with the GPU rendering the dots/lines (but probably not the sinc interpolation), though I don't think so.
Funnily enough, I do have an Android TV gadget using the very same RK3399 SoC. The Mali T860 MP4 GPU is a pretty powerful beast for such a low-power SoC. The SoC also has powerful video encode and decode blocks; the decoder handles 10-bit H.264 at 2160p@60fps. (I consider Rockchip SoCs a good choice, because Rockchip pushes the hardware-level support to vanilla Linux/Android kernels, even employing kernel developers themselves.)
Another option would have been to treat the waveform display as a camera, and use the 6 Gbit/s MIPI CSI to stream the waveform display at a suitable update rate (possibly rotated to suit the interpolation/generation better). If the displayed waveform data is stored in buffers, then a rather simple processor can generate the display one column at a time quite efficiently, somewhat similar to how the first voxel-based games generated their continuous terrains.
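To make the column-at-a-time idea concrete, here is a minimal sketch. It assumes a hypothetical buffer layout where the capture has already been reduced to one (min, max) sample pair per display column – real hardware would differ, this is just to show how cheap the per-column work is:

```python
# Sketch: render a waveform one display column at a time, assuming the
# acquisition side has already reduced the capture to one (min, max)
# sample pair per column. Hypothetical layout, for illustration only.

def render_columns(minmax, height, lo=-128, hi=127):
    """Return a column-major framebuffer: fb[x][y] is 1 where the
    waveform covers that pixel, 0 elsewhere."""
    span = hi - lo
    fb = []
    for smin, smax in minmax:
        # Map the sample range onto vertical pixel coordinates.
        y0 = (smin - lo) * (height - 1) // span
        y1 = (smax - lo) * (height - 1) // span
        fb.append([1 if y0 <= y <= y1 else 0 for y in range(height)])
    return fb

fb = render_columns([(-10, 20), (0, 0), (-128, 127)], height=8)
```

Each column only needs its own (min, max) pair and two multiplies, which is why this maps so nicely onto a simple sequential generator.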
I suspect the Zynq generates the waveform display area here.
Note that if the Zynq is fast enough to generate a much larger display area, the Mali T860 MP4 would easily be able to scale it, pixel-perfect (no nearest-neighbour nonsense; I mean cubic-interpolation scaling, as used for the medium-to-high-quality scaling mode in image editors like Gimp and Photoshop, with FFT/DCT-based scaling being the highest-quality method, only occasionally used).
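For reference, the kernel typically behind those "cubic" scaling modes is Catmull-Rom, applied separably in x and y. A 1D sketch (illustration only – not the Mali driver's actual scaler):

```python
# Sketch: 1D Catmull-Rom cubic interpolation, the kernel typically used
# for "cubic" image scaling (applied separably in x and y for 2D).
# Illustration only, not any particular GPU driver's implementation.

def catmull_rom(p0, p1, p2, p3, t):
    """Interpolate between p1 and p2; t in [0, 1]."""
    return 0.5 * (
        2 * p1
        + (p2 - p0) * t
        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
        + (3 * p1 - p0 - 3 * p2 + p3) * t * t * t
    )

def scale_row(row, out_len):
    """Scale a 1D list of pixel values to out_len samples."""
    n = len(row)
    out = []
    for i in range(out_len):
        x = i * (n - 1) / (out_len - 1)  # source-space coordinate
        k = int(x)
        t = x - k
        # Clamp neighbour indices at the edges.
        p0 = row[max(k - 1, 0)]
        p1 = row[k]
        p2 = row[min(k + 1, n - 1)]
        p3 = row[min(k + 2, n - 1)]
        out.append(catmull_rom(p0, p1, p2, p3, t))
    return out

upscaled = scale_row([0.0, 1.0, 0.0], 9)
```

Unlike nearest-neighbour, the kernel passes exactly through the source samples and produces a smooth curve between them, which is what makes the result look "pixel-perfect" when upscaling.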
It would also allow all the vertical scaling to be done in the UI, simplifying the FPGA quite a bit, at the expense of many more RAM accesses.
(Each of the four channels should have its own display buffer, for maximum flexibility, though. So quite a lot of RAM to fill by the FPGA.)
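Some back-of-the-envelope numbers on what "quite a lot of RAM" means. The buffer geometry below is my guess, not Rigol's actual layout:

```python
# Back-of-the-envelope RAM write traffic for four per-channel display
# buffers. All figures are assumptions for illustration, not the
# DHO800/900's actual buffer geometry.

channels = 4
width, height = 1024, 480   # assumed waveform-area size in pixels
bytes_per_pixel = 1         # assumed 8-bit intensity per channel
refresh_hz = 60             # assumed full-buffer update rate

bytes_per_frame = channels * width * height * bytes_per_pixel
write_mb_per_s = bytes_per_frame * refresh_hz / 1e6

print(f"{bytes_per_frame} bytes per frame, "
      f"~{write_mb_per_s:.0f} MB/s of FPGA writes")
```

Roughly 2 MB per frame and on the order of 100+ MB/s of sustained writes under these assumptions – very doable for shared DDR, but not free, especially on top of the acquisition traffic.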
I am very positively surprised at the direction these oscilloscopes are taking, because I have wondered for quite a while now why they don't do exactly this.
There is a lot of room for improvement, and not being involved in the development of these things, I don't know exactly what the bottlenecks are (I suspect RAM access from the FPGA is a major one); but I do suspect only relatively small hardware changes/upgrades are needed to fully exploit the 12-bit ADC.
At some point, I'd love to see a lower-end 12-bit scope with no built-in display at all, designed to work with full-HD or larger displays: a custom control board talking standard USB HID to a main unit that carries only the analog and digital inputs plus USB, Ethernet, and HDMI/DisplayPort connectors (with touch controls optional, not required).
I don't know about you, but that kind of thing – combined with an HDMI/DisplayPort switcher – would definitely suit my workspace and workflow better.
The DHO800/900 series shows it is definitely possible already. Mmm, modular scopes...
I do believe I will be getting a DHO804 or DHO814 myself, when they become available.