Yes, the split path input buffer was invented a long time ago – and it’s all the more baffling that most people don’t seem to be aware of it and talk as if an oscilloscope frontend still consisted of a cascade of differential amplifiers. Maybe some even think it consists of just a high speed OpAmp…
Differential amplifiers are still routine, and the highest performance digitizers have differential inputs. Usually the first stage after the low impedance attenuators converts from single ended to differential, and this stage is a convenient point to introduce the combined position and offset signal.
The various modern PGAs used in oscilloscopes are differential, so they follow the same pattern, but since they replace the low impedance attenuators, position and offset are added afterwards. DSOs with a separate offset control will add it before the PGA. Old designs which do this have to add the offset before some of the attenuation stages, which means moving some of the attenuators into the differential part of the signal chain – and that is relatively expensive.
It doesn’t make much sense to get philosophical about obsolete designs. We are talking about general purpose DSOs here, which range from entry level (low end) up to the midrange, but exclude high end gear, which is specialized and definitely not general purpose. At some point, at least after the invention of the digital readout, the T&M industry noticed that a minimum of DC accuracy and stability was expected. Users were no longer willing to constantly turn the offset control of their scopes just to center the trace, as they used to do with their ancient CROs, but expected a decently stable offset position and some accuracy. So the split path design has long since become universal for general purpose DSOs – despite its drawbacks, the most obvious being the overload recovery issue. And this one is unavoidable, even with a good design.
Of course we find cascaded differential stages in almost every HF IC, and in HF instruments like spectrum analyzers this might well be the only amplifier architecture required, but split path has become common in wideband general purpose oscilloscopes, since they are supposed to work from DC up to the specified bandwidth.
Btw, there are folks who have managed to build a balanced version of the split path input buffer, so you can have this with balanced inputs too.
If you actually think the LF noise in a split path design would be reduced, you’re forgetting that the LF path has to be attenuated quite a bit (usually up to 10 times) in order to get the desired input protection and a decent offset compensation range. This has to be compensated for by a corresponding gain in the OpAmp. Together with the high source impedance of the divider (which has to have a total resistance of 1 meg) this can raise the noise floor by more than 20 dB below the crossover frequency.
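To put some numbers on this, here is a back-of-the-envelope sketch – the 10:1 divider split, the temperature and the ~3 nV/sqrt(Hz) HF path density are my own assumed values, not taken from any particular design:

import math

# Back-of-the-envelope noise estimate for the LF path of a split path
# input buffer. All component values are illustrative assumptions.

k_B = 1.380649e-23                       # Boltzmann constant, J/K
T   = 300.0                              # temperature, K

# Assumed 10:1 divider with 1 Mohm total resistance in the LF path
R_top = 900e3                            # ohms
R_bot = 100e3                            # ohms
atten = (R_top + R_bot) / R_bot          # 10x attenuation

# Source impedance seen by the LF path OpAmp (divider output)
R_src = R_top * R_bot / (R_top + R_bot)  # 90 kohm

# Thermal noise density of that source impedance
e_div = math.sqrt(4 * k_B * T * R_src)   # ~39 nV/sqrt(Hz)

# The OpAmp gain that restores the 10x attenuation amplifies the
# divider noise as well, so referred to the scope input:
e_rti = e_div * atten                    # ~0.39 uV/sqrt(Hz)

# Compare with an assumed ~3 nV/sqrt(Hz) HF path noise density
e_hf = 3e-9
print(f"LF path noise, input referred: {e_rti * 1e9:.0f} nV/sqrt(Hz)")
print(f"excess over HF path: {20 * math.log10(e_rti / e_hf):.0f} dB")

With these assumed values the excess even comes out well above 20 dB; how much of it survives in practice depends on the divider split, the OpAmp’s own noise and how the two paths blend around the crossover frequency.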
That is a good point that I had forgotten, but the noise can still be lower even in old designs.
Old designs with two separate x10 high impedance attenuators limit the input range at the buffer to 1/10th of the level seen in new DSOs, so the attenuation in the DC path can be lower as well. The Tektronix 22xx series, for instance, only attenuates by 1.33.
Luckily for the discussion here, low frequency noise is irrelevant because wideband noise at 20 MHz and higher bandwidths dominates.
It’s not just “old designs” that utilize two input attenuators. Of course you cannot build a good scope with vertical gain settings from 500 µV/div up to 10 V/div with one single attenuator. For instance, every contemporary Siglent DSO has two input attenuator stages. Offset compensation voltage has to be added to the input in order to be effective (otherwise the input stage would require an unrealistically high common mode range), so it is part of the LF path of a split path input buffer design and topologically sits between the attenuators and the PGA.
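A quick bit of arithmetic shows why a single attenuator cannot cover that span (the two switchable 10:1 stages are an assumption for illustration):

import math

# Why one attenuator is not enough: the vertical range spans
# 500 uV/div ... 10 V/div, i.e. a 20,000:1 ratio.
span = 10 / 500e-6
print(f"gain span: {span:.0f}x = {20 * math.log10(span):.0f} dB")

# Two switchable 10:1 attenuators contribute 0/20/40 dB in steps;
# the PGA then only has to cover the remaining ~46 dB, which is a
# realistic range for a single variable gain stage.
atten_db = 40
pga_db = 20 * math.log10(span) - atten_db
print(f"PGA range needed: {pga_db:.0f} dB = {10 ** (pga_db / 20):.0f}x")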
With low attenuation factors you either need high supply rails (old designs) or you get only a very small offset compensation range. But does a Tek 22xx even have a split path design? The specification of up to one division of trace shift for variable gain and trace invert makes me wonder. All the more so as the best sensitivity, at 2 mV/div, is not particularly high. Or maybe they use the cheapest FET OpAmp with high offset voltage and drift, without self-calibration, in the LF path – but that would somehow scotch the whole idea of the split path approach?
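To make the rails/attenuation trade-off concrete, a deliberately simplified model – rail voltages and attenuation factors are assumptions, and any further division in the offset summing network is ignored:

# Simplified model: an offset injected at the divider output appears
# at the scope input multiplied by the LF path attenuation factor,
# so low attenuation demands high rails for the same offset range.
cases = [
    ("old design, high rails", 24.0,  1.33),
    ("old design, low rails",   5.0,  1.33),
    ("modern split path",       5.0, 10.00),
]
for name, rail, atten in cases:
    print(f"{name:24s} +/-{rail:4.1f} V rails, atten {atten:5.2f}x "
          f"-> offset range ~ +/-{rail * atten:5.1f} V at the input")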
Above some 100 kHz the situation eases a lot, and at 10 MHz and above we get noise densities in the realm of 2 – 3.5 nV/sqrt(Hz) in proper designs, at least from Rohde & Schwarz, LeCroy and Siglent.
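For a feel of what such a density means in practice, a rough conversion to RMS noise (the flat 3 nV/sqrt(Hz) and the first order rolloff are assumptions for illustration):

import math

# Rough conversion from noise density to input-referred RMS noise,
# assuming a flat 3 nV/sqrt(Hz) (mid range of the quoted figures)
# and a first order 20 MHz bandwidth limit.
e_n  = 3e-9              # V/sqrt(Hz)
bw   = 20e6              # Hz, -3 dB bandwidth
enbw = 1.57 * bw         # equivalent noise bandwidth, 1st order rolloff

v_rms = e_n * math.sqrt(enbw)
print(f"input referred noise: {v_rms * 1e6:.1f} uV RMS")
# At 1 mV/div (8 mV full scale) that is only about 0.2 % of full scale.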
So there is no way around the sad fact that the usual general purpose DSO isn’t well suited for precision work at low frequencies, because of the steeply rising noise floor down there.
I agree, but if you include older instruments, then some general purpose DSOs are much better than others at low and/or high frequencies. I have not tested enough modern low end DSOs to know whether they all have subpar noise performance. Even with older instruments, though, I gave up on good low noise performance a long time ago, with the exception of anything with the Tektronix 5A22/7A22/AM502.
At low frequencies it is relatively easy to make a low noise amplifier, but since oscilloscopes lack a noise marker function for their FFT, I would rather have a low noise dynamic signal analyzer instead.
I do not know what you mean by “low end” DSOs. We are talking about serious instruments here, so low end would be the entry level class. But the problem is not limited to these – all contemporary scopes up to the upper midrange have the very same problem: rising noise at very low frequencies because of the special conditions in a split path input buffer design.
If someone needs a superb instrument for low frequencies, then a Picoscope 4262 is one of the few options – apart from a DSA, that is. The 4262 only has 5 MHz bandwidth, but it is true 16 bits, has an SFDR of >96 dB and a near constant noise density from DC to its upper bandwidth limit.
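For reference, the ideal numbers for a 16 bit converter are easy to work out (the full-scale range below is an assumed value, not from the 4262 datasheet):

import math

# Ideal figures for a 16 bit converter, for comparison with the
# quoted 4262 datasheet numbers (SFDR > 96 dB).
bits = 16
snr_ideal = 6.02 * bits + 1.76           # dB, full-scale sine wave

fs_vpp  = 2.0                            # assumed full-scale range, Vpp
lsb     = fs_vpp / 2 ** bits             # one LSB
v_q_rms = lsb / math.sqrt(12)            # quantization noise, RMS

print(f"ideal SNR: {snr_ideal:.1f} dB")
print(f"LSB: {lsb * 1e6:.1f} uV, quantization noise: {v_q_rms * 1e6:.2f} uV RMS")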
It is either HiRes or ERES – I’m not quite sure – but in any case it is a true acquisition mode, in the sense of real time pre-processing. The sample memory gets halved in this mode, because the memory words are expanded to 16 bits to hold the 10 bit samples the acquisition now delivers. All the post processing, measurements and math then use the 10 bit data. The firmware cannot tell the difference between this resolution enhancement (implemented in the FPGA) and a true 10 bit ADC.
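A minimal numpy sketch of how such a mode can work in principle – the decimation factor, bit widths and data handling are my assumptions for illustration, not Siglent’s actual FPGA implementation:

import numpy as np

# Resolution enhancement by block averaging: averaging N raw ADC
# samples trades sample rate for resolution (N = 16 gains sqrt(16)
# = 4x in SNR, i.e. 2 extra bits, turning 8 bit samples into ~10 bit).
def eres(raw8, n=16):
    raw8 = raw8[: len(raw8) // n * n]
    blocks = raw8.reshape(-1, n).astype(np.int32)
    # The sum of 16 8-bit samples needs 12 bits; dividing by 4 instead
    # of 16 keeps the 2 extra bits of resolution. The result is stored
    # in 16 bit words, which is why the record length halves: memory
    # that held N 8-bit samples now holds N/2 16-bit words.
    return (blocks.sum(axis=1) // (n // 4)).astype(np.int16)

# Example: a slow ramp plus noise, quantized to 8 bits
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1 << 16)
raw8 = np.clip(np.round(100 * t + rng.normal(0, 2, t.size)),
               -128, 127).astype(np.int8)
enhanced = eres(raw8)
print(raw8.dtype, "->", enhanced.dtype,
      f"({raw8.size} -> {enhanced.size} samples)")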
Old Tektronix DSOs used 16-bit acquisition and processing memory, so high resolution mode did not halve the record length.
I was talking about a Siglent SDS2000X Plus, which provides 200 Mpts of memory per channel pair, hence there is some headroom for this. How deep was the memory in said old Tektronix DSOs?