Ok, we have a reference for the 50k/s acquisitions on the Rigol topping out at 20ns per division:
According to that reviewer, setting the memory depth to 14k was needed to get close to the banner acquisition rate. What sample rate was achieved?
Yes, that's a difference of 23% in the time taken to capture a glitch that isn't correlated with the trigger, not a 23% difference in the dead time as you keep claiming.
What do you think is the difference in dead time?
It's 23%. The guy doesn't understand basic mathematical extrapolation.
We return to my original post stating the dead time as measured in units of time:
dead_time = 1/(update_rate) - record_length
You provided the quote for the dead time expressed as a percentage of realtime:
%dead_time = 100 x (1 – (update_rate * record_time))
These are consistent with the terminology used in both the Rohde & Schwarz and Agilent documents on the topic.
So let's compare these for the two scopes at your magic number of 20 ns/div:
1 s / 54,000 - (20 ns x 10 div) = 18,319 ns
1 s / 50,000 - (20 ns x 14 div) = 19,720 ns
The dead times are within 10% of each other.
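The arithmetic above can be checked with a few lines of Python (the update rates, 20 ns/div timebase, and division counts are taken from the figures quoted in this thread; the function name is just for illustration):

```python
def dead_time_s(update_rate_hz, timebase_s, divisions):
    # dead_time = 1/update_rate - record_length,
    # where record_length = timebase * number of horizontal divisions
    return 1.0 / update_rate_hz - timebase_s * divisions

# 54 k wfm/s scope with 10 divisions vs 50 k wfm/s Rigol with 14 divisions
a = dead_time_s(54_000, 20e-9, 10)
b = dead_time_s(50_000, 20e-9, 14)
print(f"{a * 1e9:.0f} ns")  # ~18319 ns
print(f"{b * 1e9:.0f} ns")  # ~19720 ns
```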
How about calculating the dead time as a percentage of realtime?
100 x (1 - (54,000 x 20 ns x 10)) = 98.92%
100 x (1 - (50,000 x 20 ns x 14)) = 98.6%
Expressed as a percentage of realtime, they differ by 0.32 percentage points.
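The percentage form can be verified the same way (same figures as above; the function name is illustrative only):

```python
def dead_time_pct(update_rate_hz, timebase_s, divisions):
    # %dead_time = 100 * (1 - (update_rate * record_time)),
    # with record_time = timebase * number of horizontal divisions
    return 100.0 * (1.0 - update_rate_hz * timebase_s * divisions)

print(f"{dead_time_pct(54_000, 20e-9, 10):.2f}%")  # 98.92%
print(f"{dead_time_pct(50_000, 20e-9, 14):.2f}%")  # 98.60%
```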
Quoting differences in test time is a completely different thing. Welcome to engineering: terminology is both terse and precise.