If the scope has a 200 MHz front-end and the sample rate is set to 10 kSa/s as in the example, sin(x)/x reconstruction is not mathematically correct (unless the user makes sure the signal itself is band-limited to < 5 kHz, or the scope itself does the decimation). Keysight scopes, for example, switch to linear interpolation in such a scenario, which I think is a sane default; at the very least it gives you an immediate visual cue that you are sub-sampling. I guess the only mathematically correct option would be dot mode?
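To make the Nyquist point concrete, here is a minimal Python sketch (nothing the scope does internally; the 7 kHz tone is just a made-up example above Nyquist). At 10 kSa/s, a 7 kHz component produces exactly the same samples as a 3 kHz alias, so no interpolation scheme, sinc or otherwise, can reconstruct it correctly:

```python
import numpy as np

fs = 10e3          # sample rate from the example, 10 kSa/s -> Nyquist = 5 kHz
f_in = 7e3         # hypothetical input tone above Nyquist
n = np.arange(64)
samples = np.sin(2 * np.pi * f_in * n / fs)

# The same sample values are produced by the 3 kHz alias (with inverted phase),
# which is what any reconstruction will faithfully draw instead of the input.
alias = np.sin(2 * np.pi * (fs - f_in) * n / fs)
print(np.allclose(samples, -alias))   # True
```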
EDIT: I am not sure if the Siglent behaviour is a LeCroy-ism or if it is actually more widespread. It seems strange to me.
In the first image the scope is set to 500 us/div (7 ms total on screen) and 2 kSa/s. That is only 14 points on the screen, even though the scope shows a memory setting of 14 kpoints.
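A quick sanity check of that arithmetic in Python (assuming 14 horizontal divisions, which is what the quoted 7 ms total implies):

```python
timebase = 500e-6        # s/div, from the first screenshot
divisions = 14           # assumed horizontal divisions
sample_rate = 2e3        # Sa/s

screen_time = timebase * divisions            # 0.007 s on screen
points_on_screen = screen_time * sample_rate  # 14.0 points
print(screen_time, points_on_screen)
```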
In the other images the timebase and memory depth are shown the same, but the sample rate is different?
How was a variable sample rate achieved with a fixed memory length and the same timebase? Does the SDS1104X-E have a manual sample-rate setting? I don't have one, so I'm asking for clarification, and the manual doesn't say...
The sample screens shown are certainly not how the scope behaves by default.
By default the scope tries to keep the sample rate as high as it can. Also, that is the smallest memory setting, only 14 kpoints of the 14 Mpoints available, so 1000 times less than what is possible.
It also has both linear and sin(x)/x interpolation, plus a dot mode that does no interpolation whatsoever, and it would probably show this signal nicely.
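For anyone curious what the three display modes boil down to, here is a rough Python sketch (made-up 1 kHz sine sampled at 20 kSa/s, not the scope's actual firmware algorithms):

```python
import numpy as np

fs, f_sig = 20e3, 1e3                        # hypothetical example values
n = np.arange(40)
t_s = n / fs
x = np.sin(2 * np.pi * f_sig * t_s)          # "dot mode": just these raw samples

t_fine = np.linspace(t_s[0], t_s[-1], 2000)  # dense display grid

# Linear interpolation: straight lines between adjacent samples.
x_lin = np.interp(t_fine, t_s, x)

# sin(x)/x (sinc) interpolation: only valid if the signal is band-limited
# below fs/2, which is exactly the condition debated above.
x_sinc = np.array([np.sum(x * np.sinc((t - t_s) * fs)) for t in t_fine])
```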
I can demonstrate wrong-looking signals on any scope if I deliberately set it up wrong...
I understand that the post was meant to demonstrate sampling, but this is not the scope's normal behaviour; Robert had to set it up specifically to show these effects.