The 800 µs period of the 1 kHz test signal is... unsettling.
Me? I wouldn't be too unsettled by something that took five years for anybody to even notice.
Well, the point in my opinion is that this is behaviour nobody would expect even in an entry-level scope. It has gone unnoticed only because people did not question the timing of well-behaved signals at particular combinations of sample rate and memory depth.
This is an embarrassing bug because it seems to be caused by a flawed implementation of a mathematical routine.
If it does that at 24M of memory and 4 MS/s with the timebase between 0.5 and 5 seconds, it may well happen with other combinations of memory depth, sampling rate and timebase. A 20% timing error, with no idea when it might strike, makes this scope more like a reverse slot machine: get the wrong combination and you hit the jackasspot.
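As a rough sanity check on those numbers (a sketch only; it assumes the DS1000Z's 12 horizontal divisions and takes the displayed sample rate at face value):

```python
# Rough sanity check of the reported settings (assumptions, not measured values):
# 24 Mpts capture memory, 4 MSa/s displayed sample rate, 12 horizontal divisions.
mem_depth = 24_000_000        # points
sample_rate = 4_000_000       # Sa/s, as reported by the scope
divisions = 12                # DS1000Z-style horizontal divisions (assumed)

record_length_s = mem_depth / sample_rate     # 6.0 s of capture
screen_span_s = 0.5 * divisions               # 6.0 s at 500 ms/div

# A 1 kHz calibrator signal should show a 1.000 ms period;
# the reported 800 us reading is a 20 % error.
nominal_period = 1e-3
reported_period = 800e-6
error = (nominal_period - reported_period) / nominal_period
print(f"record = {record_length_s} s, screen = {screen_span_s} s, error = {error:.0%}")
```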
Luckily nobody would use an entry-level hobbyist scope for anything serious, but I wonder whether this bug also appears in the next line up, such as the DS2000 series.
If the firmware is based on the same mathematical algorithms, it may well be the case...
It is interesting speculation, I think...
Someone may have this scope in a place where all instruments need calibration certification.
Now they send this one, too, to the cal lab that checks instruments and writes certificates. Who wants to bet this Rigol passes the tests, gets its certificate and a cal sticker somewhere on the front panel, and everybody happily uses it, with measured results now made on a "NIST-traceable" calibrated instrument?
Anyone with enough loose money for a test: send it in with the company's other instruments for a new cal certificate, without any note about the possible error.
My forecast: in the return shipment there will be one Rigol with a certificate.
And what about Rigol's own factory calibration, where they claim the scope meets its specifications, even listing the test instruments used for the factory check and a nice story about how they are traceable to standards? This case shows the whole process is just bullshit. They have not done any kind of full calibration check the way it needs to be done; if they had, this error would not be here now. How can the factory's own calibration check fail to detect an error that is not small? A 4% absolute accuracy error in the vertical... blah... would not be a big alarm if it happened.
But this error is on the horizontal time axis! That is controlled by the scope's TCXO or whatever timebase reference, which has specifications in ppm. That makes a big difference.
Perhaps if someone pulls that 24M record out, analyzes it carefully and runs a few experimental tests, it may tell whether this is partly a math problem in the scaling to the display, or whether it is also in the small intermediate measurement buffer, or in the whole true capture memory. A rough sketch of such a check follows.
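A minimal sketch of that kind of offline check, assuming the record has been saved as a two-column CSV of time/voltage pairs; the file name and column layout here are only illustrative, not the actual Rigol export format:

```python
import numpy as np

# Illustrative check of a saved deep-memory record against a known test signal.
# Assumes a two-column CSV (time in seconds, voltage in volts); the file name
# and layout are assumptions, not the Rigol format.
t, v = np.loadtxt("ds1000z_24M_capture.csv", delimiter=",", unpack=True)

# Find rising-edge crossings of the signal's mid-level.
mid = 0.5 * (v.max() + v.min())
above = v > mid
crossings = t[1:][~above[:-1] & above[1:]]

measured_period = np.mean(np.diff(crossings))
nominal_period = 1e-3                       # 1 kHz test signal
error = (measured_period - nominal_period) / nominal_period
print(f"measured period: {measured_period*1e6:.1f} us ({error:+.1%} vs nominal)")

# If the raw samples already show ~800 us, the error is in acquisition/decimation;
# if they show 1000 us, it is in the display scaling or the measurement routine.
```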
In the capture memory there should be no floating-point "math" if they have done it right. There should be only decimation.
I do not know what the real ADC sample rate in the Rigol is in every case.
Is it possible to check whether this failure also exists with the same other settings but with 1, 2, 3 or 4 channels in use simultaneously? Yes, more channels drop the true ADC sample rate and of course also divide the memory.
If only ONE channel is in use on a very slow time scale, say 500 ms/div, do they take the full 4-ADC interleaved stream and decimate it, or do they take just one ADC and decimate that to get the 4 MSa/s?
Speculation: if they use the full 4-ADC interleaved stream when one channel is open, that is 1 GSa/s. For 4 MSa/s it needs decimation by 250 (drop 249 samples, keep one, and forward it to capture memory).
If they use only one ADC (no interleaving), the true ADC speed is 250 MSa/s (still plenty for this timescale). For 4 MSa/s they would need to decimate by 62.5, which is impossible with a constant factor; they would have to alternate 62 and 63, which is... not so nice. A sketch of what goes wrong if such a fractional factor is mishandled is below.
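Purely to illustrate the mechanism speculated about above (the 250 MSa/s and 4 MSa/s figures are guesses from this thread, not confirmed Rigol behaviour), here is how a mishandled fractional decimation factor skews the time axis:

```python
# Toy illustration of the mechanism only, using the guessed figures above.
adc_rate = 250e6                     # assumed true single-ADC sample rate
target_rate = 4e6                    # sample rate the scope reports
ideal_factor = adc_rate / target_rate            # 62.5 -- not an integer

# Correct handling: alternate factors 62 and 63 so the average is 62.5.
rate_alternating = adc_rate / ((62 + 63) / 2)    # exactly 4.0 MSa/s

# Broken handling: constant factor 62, but still timestamped as 4 MSa/s.
rate_constant62 = adc_rate / 62                  # ~4.032 MSa/s really stored

# A signal of true period T then shows an apparent period of
#   T * (true stored rate) / (assumed rate).
T = 1e-3                                          # 1 kHz test signal
apparent = T * rate_constant62 / target_rate
print(f"ideal factor {ideal_factor}, alternating rate {rate_alternating/1e6} MSa/s, "
      f"constant-62 apparent period {apparent*1e6:.2f} us")
# This particular rounding gives only a ~0.8 % skew; the reported 20 % error would
# need a much larger mismatch between the stored rate and the assumed rate.
```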
If it is just a pure decimation error, it can perhaps be detected by looking at the saved acquisition memory on a computer while feeding some external test signals.
But how has the factory tested it so that they can write proof that it meets its specifications? Of course it needs to be checked at every single time scale. That is how we did it for decades when there were only analog scopes, and it still needs to be done today, for every timescale, precisely to catch programming errors like this in production. Once it is checked, there is no need to re-check every timescale every time; a program does not drift. On analog scopes even that had to be checked in every calibration procedure, as you can see for example in old Tektronix full service and calibration manuals.
Did this creep in accidentally in some firmware update, or has the error been there ever since product launch? The latter would be even more severe, because it would mean they never truly checked it to prove the specifications.
They think, like many Chinese companies, that customers are good enough for doing beta tests, or even earlier tests.
One bad practice is that in some poorly run companies the same people who design the hardware and write the firmware also do the final tests before launch. That is a real class-A mistake.
There needs to be a separate test team, kept well away from the designers during the test phase, and the more errors they find, the more money they get: minor errors a bit less, a fatal major-class error a pocket full of bank notes. This works. Why do some Chinese, Indian and other companies skip over this like a lazy fox?
Some say this is not serious because it is only a hobby scope. Really?
Who thinks hobbyists are some group that can eat whatever shit, crap and fnirsijokes?
This instrument has specifications. Who can think specifications are allowed to be like joke stories or other trumpths?
You can find this scope in many places doing real work, not only as decoration in a hobbyist's corner.
No, this is a real and serious error that a responsible company needs to repair urgently, and better still publish a public notice about it so that users can avoid this error trap. I remember well the HP and Tektronix notification and errata letters from before ARPANET, and now there is the internet.