I think the reason the Siglent is not showing a standard deviation (or is showing 0 pV) is the low-noise front end: there is effectively no noise in the digitized signal, so all measurements land on the exact same ADC value/code.
I have tested it, and as you can guess, it is just as you said.
I repeated it with a signal of roughly the same kind as in Dave's video, where he tries it and laughs, wondering like Alice in Wonderland at what the Siglent displays (SD). Even when he sees 4.12/4.12/4.12 he still laughs at this 0 V SD.
(One thing is wrong, though: when it inhibits measuring min and max, and due to that also the mean, it should also show *** as the SD. It is some kind of mini-bug.)
OK, I drove a roughly 4 Vpp, 1 kHz sine and set up the scope like Dave did: 1.4 Mpts memory to reduce the sample rate, and 1 ms/div.
Then I adjusted the signal generator carefully so that the peak samples stay in the same ADC step over every single acquisition. Min 4.12 V, max 4.12 V, of course average 4.12 V, and of course Std 0.
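If someone wants to play with this at home without a scope, here is a small Python sketch of the same effect. The numbers are my assumptions for illustration (8-bit ADC over a 10 V span, a noiseless 2 V amplitude sine, 19 acquisitions like the history buffer, 1 MSa/s), not the real instrument settings: with zero noise, the per-acquisition peak lands on the same ADC code every time, so the SD of the peak measurement is exactly 0.

```python
import numpy as np

# Assumed model: 8-bit ADC, 10 V full range (-5 V .. +5 V),
# so one ADC step is ~39 mV. Noiseless 4 Vpp / 1 kHz sine.
FULL_RANGE = 10.0
STEP = FULL_RANGE / 256          # one ADC code ~= 39.06 mV

def quantize(v):
    """Map a voltage to the centre of its ADC code (mid-tread rounding)."""
    return np.round(v / STEP) * STEP

rng = np.random.default_rng(0)
peaks = []
for _ in range(19):                      # 19 acquisitions, like the history buffer
    t = np.arange(0, 2e-3, 1e-6)         # 2 ms record at an assumed 1 MSa/s
    phase = rng.uniform(0, 2 * np.pi)    # free-running trigger phase
    v = 2.0 * np.sin(2 * np.pi * 1e3 * t + phase)  # 4 Vpp sine, zero noise
    peaks.append(quantize(v).max())      # per-acquisition Vmax measurement

peaks = np.array(peaks)
print(peaks.min(), peaks.max(), peaks.std())
# Every peak lands on the same ADC code, so the standard deviation is 0.0
```

Nudge the amplitude so that the true peak sits near an ADC code boundary and the peaks start straddling two codes, and the SD immediately becomes nonzero, exactly as on the real scope.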
I ran it many times, because with this setup the history buffer can keep only the last 19 acquisitions.
Then I zoomed deep into the sine top and/or bottom and checked every one of the 19 acquisitions. There is no exception in the max value. Both the sine-wave top and the bottom show just 2 ADC steps (noise), but all the peak values stay on the same ADC level; there are no random overshoots of one (or more) ADC steps.
I did this check many times. Whenever the signal stays stable and right at the "golden" position/level, it keeps up this "miracle". Of course, math is math. I hope simple math follows the same rules in Australia, too (sarc.)
If I adjust the signal a tiny bit higher or lower, it starts to generate more random variation in the peak value, and then of course it also starts showing an SD. I can easily find these "golden" levels by fine-adjusting the signal generator level. It works, and it calculates just right, at least in this particular check that I did. Even in the case where it shows 0 Std in my test, I have checked it: if it showed anything different, there would be an error in the math, given the data I looked at. Of course I can also adjust the signal so that it continuously gives something other than 0 Std; I only need to change the signal a tiny bit. And again, it calculates it right.
Then Dave wonders about the 2-digit resolution: 10 V full range and an 8-bit ADC, and an electronics EE wonders about 2 decimals.
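The arithmetic behind those 2 decimals is a one-liner (assuming, as above, an 8-bit ADC spread over the full 10 V span):

```python
# One raw ADC code on an 8-bit converter over a 10 V span.
codes = 2 ** 8
step = 10.0 / codes
print(f"{codes} codes, step = {step * 1000:.2f} mV")
# -> 256 codes, step = 39.06 mV
```

With one raw code worth ~39 mV, showing more than two decimals for a single reading would just be displaying quantization, so the 2-decimal display is reasonable.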
Just like adjusting scopes: first use the brain, and only after that the muscle.
Btw, sometimes it can even show a few femtovolts (or less) of SD in cases where there is a bigger difference between max and min, but we have run it for an extremely long time and the max-min difference comes from just some extremely rare single shot. (I leave this puzzle here; let's see if we can laugh later.) Whoever understands this does not laugh. But whoever perhaps lacks enough real experience and knowledge may roll their eyes and say something ridiculous. One just needs to do a bit of homework.
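A hint for the puzzle, as a sketch (my own illustrative model, not the scope's statistics engine): if out of n measurements all land on the same ADC code except one rare outlier one step away, the population SD works out to delta * sqrt(n - 1) / n, which shrinks roughly as 1/sqrt(n). So after a very long run, a single-shot max-min difference contributes almost nothing to the SD.

```python
import math

def sd_one_outlier(n, delta):
    """Population standard deviation of (n - 1) identical values plus one
    value offset by delta: sd = delta * sqrt(n - 1) / n."""
    return delta * math.sqrt(n - 1) / n

step = 10.0 / 256                      # one ADC step, ~39 mV (assumed range)
for n in (1_000, 1_000_000, 1_000_000_000):
    print(n, sd_one_outlier(n, step))  # SD shrinks roughly as 1/sqrt(n)
```

Run the measurement statistics long enough and one lonely outlier can leave an SD readout many orders of magnitude below one ADC step, even though max-min is a full step.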
Attached image. No signal, terminated with 50 ohm. Reduced memory to get more acquisitions into the history buffer.
In every sequential acquisition, every ADC sample sits in the same ADC step. Now, if we add a very-low-noise signal here, with the level set just so that it matches these ADC levels optimally, the result can be "amazing" until the user understands what is going on.
The reason: there is not enough noise.
This is also a common problem if we try to get more resolution using, for example, averaging. We need noise. For this kind of purpose, some advanced instruments need to add noise to get better accuracy/resolution. The same can happen on the time axis (for example in some advanced frequency counters) and on the level axis.
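A small sketch of why averaging needs noise (again with my assumed 8-bit/10 V numbers): take a DC level that sits between two ADC codes. Without noise, every reading is the identical code, so averaging a million readings gains nothing. Add roughly one LSB of noise (dither) before the ADC and the average converges to the true level, well below one ADC step:

```python
import numpy as np

STEP = 10.0 / 256                 # one ADC step (assumed 8-bit, 10 V range)
true_v = 1.234                    # a DC level sitting between two ADC codes

def adc(v):
    """Ideal mid-tread quantizer: round to the nearest ADC code."""
    return np.round(v / STEP) * STEP

rng = np.random.default_rng(1)
n = 100_000
quiet = adc(np.full(n, true_v)).mean()                  # no noise at all
dithered = adc(true_v + rng.normal(0, STEP, n)).mean()  # ~1 LSB of noise

print(f"true {true_v:.4f}  quiet {quiet:.4f}  dithered {dithered:.4f}")
# quiet is stuck on the nearest code (1.2500); dithered lands near 1.2340
```

The noiseless average is stuck on the nearest code no matter how many readings we take, while the dithered one resolves far below one LSB. That is exactly why a too-quiet front end can be a problem, not a feature, for this kind of measurement.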