Author Topic: How much noise floor and other things matter in oscilloscope usability  (Read 47124 times)


Offline rf-loop

  • Super Contributor
  • ***
  • Posts: 4131
  • Country: fi
  • Born in Finland with DLL21 in hand
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #100 on: December 26, 2021, 07:19:18 am »


I believe that many very-high-BW scopes now have more than a first-order roll-off and thus a flatter frequency response but poorer step response.

The same is of course true of the not-very-high-BW Siglent SDS6k.
A "poorer" step response… which can also turn into a "less aliased" step response.
 

Offline jonpaul

  • Super Contributor
  • ***
  • Posts: 3532
  • Country: fr
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #101 on: December 26, 2021, 10:38:55 am »
Bonjour, I have just seen this long thread.

I suggest the OP check the many fine books and papers on noise: its definition, measurement, and reduction.

The noise "floor" is a function of the resistance, bandwidth, temperature.
Averaging is possible only on repetitive signals.
Noise may be irrelevant in some digital systems, but or primary importance in fine instrumentation, audio, etc.
Think of microphone preamps, seismic pickups, photomultipliers, etc.
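For anyone who wants to put numbers on that before reaching for the books, a minimal sketch of the Johnson–Nyquist relation (my own illustration of the standard physics, not tied to any particular instrument):

Code: (Python)
import math

def johnson_noise_vrms(r_ohms, bw_hz, temp_k=290.0):
    """Thermal noise of a resistor: Vrms = sqrt(4 k T R B)."""
    k = 1.380649e-23  # Boltzmann constant, J/K
    return math.sqrt(4.0 * k * temp_k * r_ohms * bw_hz)

# A 1 Mohm scope input over a 20 MHz bandwidth at room temperature:
print(johnson_noise_vrms(1e6, 20e6))  # ~5.7e-4 V, i.e. roughly 570 uVrms
# The same resistance over a 20 kHz audio bandwidth:
print(johnson_noise_vrms(1e6, 20e3))  # ~18 uVrms

Halving R or B only lowers the floor by sqrt(2), which is why bandwidth limiting is the cheapest noise reduction there is.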

A digital scope and an analog scope are "different animals", each with its own benefits and downsides.

Many options exist for preamps, diff amps, etc. The best we have seen is the Tektronix 7000-series plug-in, the 7A22.

Finally, the Chinese scopes may have misleading specs, hidden faults, and the cheapest implementation.

We have used the classic Tektronix scopes since 1967.

Just my reflections!

Bonne chance,

Jon




 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1308
  • Country: de
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #102 on: December 26, 2021, 11:09:48 am »
The SDS2000 series has an excellent software enhanced 10 bit mode, which limits the bandwidth to 100 MHz and lowers the noise floor even more. See the next screenshot.

The spectrum reminds me of the typical frequency response of an 8-tap moving average filter (possibly in addition to other filters).
Is this the well-known HiRes mode, or yet a different mode?
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17037
  • Country: us
  • DavidH
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #103 on: December 26, 2021, 11:34:57 am »
Myth #2: “Frontends with higher bandwidths are always noisy, even when bandwidth limited.”

I think the statement, at least the one I'm thinking of, was that the noise density was higher for higher-BW capable amplifiers.  The noise will still be a function of the noise density and the actual bandwidth, so limiting BW will  still reduce noise as expected.

Well, the FFT plots in my screenshots show nothing but the noise density. Of course, the total noise is actually higher for the 2 GHz instrument at full bandwidth than what it could ever be on the SDS2kX Plus.

I strongly suggest that, unlike in the seventies of the last century, for modern semiconductors the noise density remains fairly constant over frequency. In my screenshots it can be seen that it actually gets lower at higher frequencies, and four times the system bandwidth doesn't mean a higher noise density at all.

For a given transistor technology and construction, there is a tradeoff between bandwidth and noise density, so for instance a 2N3822 JFET supporting a bandwidth up to 115 MHz (1) has a noise density of about 3.5 nV/sqrt(Hz), while a 2N4416 JFET supporting a bandwidth up to 250 MHz has a noise density of about 6 nV/sqrt(Hz).  If the latter is used in a 100 MHz amplifier, it results in higher noise than the lower performance part, even with the same bandwidth.
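To put rough numbers on that tradeoff, here is a quick sketch integrating a flat noise density over a brick-wall bandwidth (the flat density and ideal filter are simplifying assumptions; 1/f noise and the noise bandwidth of a real response are ignored):

Code: (Python)
import math

def total_noise_uvrms(density_nv_rthz, bw_hz):
    """Total noise of a flat density integrated over a brick-wall bandwidth."""
    return density_nv_rthz * 1e-9 * math.sqrt(bw_hz) * 1e6  # uVrms

# Both parts used in the same 100 MHz amplifier:
print(total_noise_uvrms(3.5, 100e6))  # 2N3822-class: ~35 uVrms
print(total_noise_uvrms(6.0, 100e6))  # 2N4416-class: ~60 uVrms

Same bandwidth, but the faster part gives roughly 1.7 times (about 4.7 dB) more noise.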

Further, in general MOSFETs are noisier than JFETs, which are noisier than bipolar transistors; this may be an issue with modern instruments, which are more likely to rely on RF MOSFETs instead of RF JFETs for their input buffer.  It is difficult to make an analytical comparison here, even if we know what part is being used, because RF MOSFETs are not as well characterized for noise.  This also means that the noise from a bipolar stage following the high impedance input buffer should be of no significance.

The above does not apply to higher bandwidth instruments that use exotic technologies which are effectively unavailable to us.  Specialized transistors on exotic processes will have a completely different figure of merit for bandwidth and noise compared to silicon MOSFETs, JFETs, and bipolar transistors, but the general rule about the tradeoff between them still applies.  But these inexpensive DSOs up to 350 or maybe even 500 MHz are not using anything like that.

And of course none of the above says anything about poor design.  Noise could be in excess of the predicted front end noise for lots of different reasons.  Empirical measurement is king here and easy to do in this case if the oscilloscope can report peak-to-peak or AC RMS (standard deviation) measurements.  (2)

(1) As a source follower where Ft = Gm / (2 Pi C); the transistor used for the high impedance buffer needs high transconductance and low capacitance.  High transconductance reduces noise, up to a point where other internal noise sources dominate, but the construction for low capacitance increases it.

(2) Which reminds me to suggest being a little cagey about Rigol's RMS and standard deviation measurements on a noise waveform, or those of any instrument which makes measurements on the display record.  I have seen evidence in the past that the processing used to produce the display record corrupts these measurements when applied to noise.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17037
  • Country: us
  • DavidH
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #104 on: December 26, 2021, 11:38:41 am »
Quote
The picture shows 10 MSa/s. So there is an additional BW limit (~5 MHz) there. To do a fair comparison one would have to switch the faster scope to a slower horizontal rate as well, to get the lower sampling rate.

I'm not sure there will be a 5MHz bandwidth limit. With the 30MHz limiter enabled I think the bandwidth limit for signals is still 30MHz on this scope even at low sample rates. One would have to be wary of aliasing but the signal being viewed here is noise.

At low sample rates the noise within the ADC input bandwidth simply gets aliased to lower frequencies.  The total noise remains the same.
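This is easy to demonstrate numerically. Decimating white noise without any anti-alias filtering folds the spectrum down, but the total power is unchanged (a sketch; the input array stands in for noise already band-limited by the ADC front end):

Code: (Python)
import numpy as np

rng = np.random.default_rng(0)
full_rate = rng.normal(0.0, 1.0, 1_000_000)  # noise within the ADC input BW
decimated = full_rate[::100]                 # 100x lower sample rate, no filter

print(full_rate.std())  # ~1.0
print(decimated.std())  # still ~1.0 - the noise just aliases, power is kept

With HiRes-style averaging before decimation the story changes, of course, which is exactly the point of that mode.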

For low frequencies, things are a lot more complex than just a FET buffer, because of the split path design of all contemporary wideband frontends. The practical consequence is that general purpose (wideband) oscilloscopes generally aren't well suited for low frequency tasks below about 10 kHz, regardless of the probes used. There are specialized instruments for this.

Split path high impedance buffers started showing up not long after integrated low input bias current operational amplifiers in the 1970s.  The split path actually reduces low frequency noise because even a noisy operational amplifier has lower flicker noise than the RF FET used for the high impedance buffer.  Sometimes it is a lot lower.

The disadvantage of the split path design is that without careful consideration, overload recovery can be horrible.

The SDS2000 series has an excellent software enhanced 10 bit mode, which limits the bandwidth to 100 MHz and lowers the noise floor even more. See the next screenshot.

The spectrum reminds me of the typical frequency response of an 8-tap moving average filter (possibly in addition to other filters).
Is this the well-known HiRes mode, or yet a different mode?

High resolution mode is usually, if not always, implemented as a boxcar averaging filter for simplicity, since it must operate at the maximum sample rate during decimation, so it should produce something like a sinc response, which is what is shown.
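For reference, the magnitude response of an N-tap boxcar at sample rate fs is the periodic sinc |sin(N pi f/fs) / (N sin(pi f/fs))|, with nulls at multiples of fs/N. A quick sketch (fs and N here are just example values):

Code: (Python)
import numpy as np

def boxcar_response_db(f, fs, n_taps):
    """|H(f)| of an N-tap moving average, in dB (avoid f at exact nulls)."""
    x = np.pi * f / fs
    h = np.sin(n_taps * x) / (n_taps * np.sin(x))
    return 20 * np.log10(np.abs(h))

fs, n = 2e9, 8  # e.g. 2 GSa/s with an 8-tap average; first null at 250 MHz
f = np.array([50e6, 100e6, 300e6])
print(boxcar_response_db(f, fs, n))  # ~[-0.6, -2.4, -15.8] dB

The side lobes between those nulls are what show up as the characteristic scalloped noise spectrum.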
« Last Edit: December 26, 2021, 11:42:17 am by David Hess »
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1308
  • Country: de
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #105 on: December 26, 2021, 12:38:08 pm »
High resolution mode is usually, if not always, implemented as a boxcar averaging filter for simplicity, since it must operate at the maximum sample rate during decimation, so it should produce something like a sinc response, which is what is shown.

In the screenshot, obviously an 8-tap boxcar averaging filter (or similar) was applied, but without down-sampling; otherwise the 3 side lobes would no longer be visible in the spectrum, but would already be folded down to the first Nyquist zone of the lower sampling rate. So I was just wondering whether this was really "HiRes" mode (in the sense of LeCroy's definition), or rather a different mode which just applies a post-acquisition filter.

Indeed, when it must run in real time, during acquisition, a boxcar filter of course has the simplicity advantage that it can be implemented as a CIC filter, requiring no multiplications.
Given the huge memory depths available today, a scope manufacturer may be tempted, though, to forgo capture-time DSP entirely and support only post-acquisition filters, for cost reasons.
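For illustration, a first-order CIC decimator by R is exactly an R-tap boxcar: an integrator running at the full rate and one comb (difference) at the low rate, integer adds and subtracts only (a minimal sketch; real implementations wrap the accumulator in fixed-width arithmetic):

Code: (Python)
def cic_decimate(samples, r):
    """1st-order CIC: integrate at full rate, decimate by r, comb at low rate.
    Each output is the sum of r consecutive inputs (divide by r to average;
    a plain right shift if r is a power of two)."""
    acc, integrated = 0, []
    for s in samples:               # integrator at the full sample rate
        acc += s
        integrated.append(acc)
    out, prev = [], 0
    for v in integrated[r - 1::r]:  # keep every r-th integrator output
        out.append(v - prev)        # comb (differentiator) at the low rate
        prev = v
    return out

print(cic_decimate([1, 1, 1, 1, 2, 2, 2, 2], 4))  # [4, 8] -> averages 1 and 2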
 

Offline Performa01

  • Super Contributor
  • ***
  • Posts: 1701
  • Country: at
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #106 on: December 26, 2021, 12:45:46 pm »
For low frequencies, things are a lot more complex than just a FET buffer, because of the split path design of all contemporary wideband frontends. The practical consequence is that general purpose (wideband) oscilloscopes generally aren't well suited for low frequency tasks below about 10 kHz, regardless of the probes used. There are specialized instruments for this.

Split path high impedance buffers started showing up not long after integrated low input bias current operational amplifiers in the 1970s.  The split path actually reduces low frequency noise because even a noisy operational amplifier has lower flicker noise than the RF FET used for the high impedance buffer.  Sometimes it is a lot lower.

The disadvantage of the split path design is that without careful consideration, overload recovery can be horrible.
Yes, split path input buffers were invented a long time ago – and it's all the more baffling that most people don't seem to be aware of them and make it sound as if an oscilloscope frontend still consists of a cascade of differential amplifiers. Maybe some even think it consists of just a high speed OpAmp…

If you actually think the LF noise in a split path design would be reduced, you’re forgetting that the LF path has to be attenuated quite a bit (usually up to 10 times) in order to get the desired input protection and a decent offset compensation range. This has to be compensated for by a corresponding gain in the OpAmp. Together with the high source impedance of the divider (which has to have a total resistance of 1 meg) this can raise the noise floor by more than 20 dB below the crossover frequency.
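A back-of-the-envelope sketch of that effect (the 10:1 split of the 1 meg divider and the op-amp noise density are illustrative assumptions, not any particular scope):

Code: (Python)
import math

k, T = 1.380649e-23, 290.0

def en_resistor_nv(r_ohms):
    """Thermal noise density of a resistance, in nV/sqrt(Hz)."""
    return math.sqrt(4.0 * k * T * r_ohms) * 1e9

# Assumed LF path: 1 Mohm total divider split 900k/100k (10:1)
r_src = 900e3 * 100e3 / (900e3 + 100e3)     # ~90 kohm Thevenin impedance
en_opamp = 6.0                              # assumed JFET op-amp, nV/rtHz
en_at_opamp = math.hypot(en_resistor_nv(r_src), en_opamp)  # ~38 nV/rtHz
en_input_referred = 10.0 * en_at_opamp      # x10 gain restores full scale

en_hf_path = 3.5                            # assumed FET buffer density
print(en_input_referred)                               # ~385 nV/rtHz
print(20 * math.log10(en_input_referred / en_hf_path)) # ~41 dB penalty

So even before the op-amp's 1/f corner kicks in, the LF path can sit tens of dB above the HF path, consistent with the "more than 20 dB" figure.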

So there is no way around the sad fact that the usual general purpose DSO isn't well suited for precision work at low frequencies, because of the steeply rising noise floor down there.
 

Offline Performa01

  • Super Contributor
  • ***
  • Posts: 1701
  • Country: at
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #107 on: December 26, 2021, 12:57:16 pm »
The SDS2000 series has an excellent software enhanced 10 bit mode, which limits the bandwidth to 100 MHz and lowers the noise floor even more. See the next screenshot.

The spectrum reminds me of the typical frequency response of an 8-tap moving average filter (possibly in addition to other filters).
Is this the well-known HiRes mode, or yet a different mode?

It is either HiRes or ERES - I'm not quite sure - but in any case it is a true acquisition mode, in the sense of real-time pre-processing. The sample memory gets halved in this mode, because it is expanded to 16 bits of width, as the captured raw data now consists of 10-bit samples. All the post-processing, measurements and math now use the 10-bit data. The firmware cannot tell the difference between this resolution enhancement (implemented in the FPGA) and a true 10-bit ADC.

 
The following users thanked this post: gf

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17037
  • Country: us
  • DavidH
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #108 on: December 26, 2021, 08:12:51 pm »
Indeed, when it must run in real time, during acquisition, a boxcar filter of course has the simplicity advantage that it can be implemented as a CIC filter, requiring no multiplications.
Given the huge memory depths available today, a scope manufacturer may be tempted, though, to forgo capture-time DSP entirely and support only post-acquisition filters, for cost reasons.

The implementations I have seen all used a power-of-2 number of samples so the filter could be implemented with only adds and shifts, and when promoting 8-bit acquisitions to a 16-bit record, with only adds.  Modern low end DSOs usually produce only an 8-bit acquisition record, but all of the old Tektronix DSOs promoted 8 and 10 bit samples to 16 bits immediately and did all processing in 16 bits.  Tektronix was very scrupulous at one time.
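A sketch of that adds-and-shifts form: with N = 2^k taps, the sum of N 8-bit samples fits in 8+k bits, and scaling it onto a 16-bit record is a left shift rather than a divide (N and the bit widths here are illustrative):

Code: (Python)
def hires_decimate_16bit(samples_8bit, k):
    """Boxcar-average 2**k consecutive 8-bit samples into one 16-bit value.
    Adds only, plus one shift: the sum fits in 8+k bits, and <<(8-k)
    fills the 16-bit record (k <= 8 assumed)."""
    n = 1 << k
    out = []
    for i in range(0, len(samples_8bit) - n + 1, n):
        total = 0
        for s in samples_8bit[i:i + n]:
            total += s                # adds only, no multiply
        out.append(total << (8 - k))  # scale by shifting, no divide
    return out

print(hires_decimate_16bit([100, 101, 102, 103, 104, 105, 106, 107], 3))
# [26496], i.e. an average of 103.5 carried at 16-bit resolution

Promoting to 16 bits up front, as the old Tektronix units did, keeps those fractional bits for everything downstream instead of rounding them away per acquisition.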
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17037
  • Country: us
  • DavidH
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #109 on: December 26, 2021, 09:13:48 pm »
Yes, split path input buffers were invented a long time ago – and it's all the more baffling that most people don't seem to be aware of them and make it sound as if an oscilloscope frontend still consists of a cascade of differential amplifiers. Maybe some even think it consists of just a high speed OpAmp…

Differential amplifiers are still routine and the highest performance digitizers have differential inputs.  Usually the first stage after the low impedance attenuators converts from single ended to differential, and this stage is where the combined position and offset signal is conveniently introduced.

The various modern PGAs used in oscilloscopes are differential, so they follow the same pattern, but since they replace the low impedance attenuators, position and offset are added afterwards.  DSOs with a separate offset control will add it before the PGA.  Old designs which do this have to somehow add the offset before some of the attenuation stages, which means moving some of the attenuators to the differential part of the signal chain, which is relatively expensive.

Quote
If you actually think the LF noise in a split path design would be reduced, you’re forgetting that the LF path has to be attenuated quite a bit (usually up to 10 times) in order to get the desired input protection and a decent offset compensation range. This has to be compensated for by a corresponding gain in the OpAmp. Together with the high source impedance of the divider (which has to have a total resistance of 1 meg) this can raise the noise floor by more than 20 dB below the crossover frequency.

That is a good point that I had forgotten, but the noise can still be lower even in old designs.

Old designs which have two separate x10 high impedance attenuators limit the input range to the buffer to 1/10th the level of new DSOs, so attenuation on the DC path is also lower.  The Tektronix 22xx series only attenuates by 1.33.

Luckily for the discussion here, low frequency noise is irrelevant because wideband noise at 20 MHz and higher bandwidths dominates.

Quote
So there is no way around the sad fact that the usual general purpose DSO isn't well suited for precision work at low frequencies, because of the steeply rising noise floor down there.

I agree, but if you include older instruments, then some general purpose DSOs are much better than others at low and/or high frequencies.  I have not tested enough modern low end DSOs to know if they all have subpar noise performance.  Even with older instruments, though, I gave up on good low noise performance a long time ago, with the exception of anything with the Tektronix 5A22/7A22/AM502.

At low frequencies it is relatively easy to make a low noise amplifier, but since oscilloscopes lack the noise marker function for their FFT, I would like to have a low noise dynamic signal analyzer instead.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17037
  • Country: us
  • DavidH
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #110 on: December 26, 2021, 09:16:42 pm »
It is either HiRes or ERES - I'm not quite sure - but in any case it is a true acquisition mode, in the sense of real-time pre-processing. The sample memory gets halved in this mode, because it is expanded to 16 bits of width, as the captured raw data now consists of 10-bit samples. All the post-processing, measurements and math now use the 10-bit data. The firmware cannot tell the difference between this resolution enhancement (implemented in the FPGA) and a true 10-bit ADC.

Old Tektronix DSOs used 16-bit acquisition and processing memory so high resolution mode did not halve the record length.
 

Offline Performa01

  • Super Contributor
  • ***
  • Posts: 1701
  • Country: at
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #111 on: December 27, 2021, 01:14:52 am »
Yes, split path input buffers were invented a long time ago – and it's all the more baffling that most people don't seem to be aware of them and make it sound as if an oscilloscope frontend still consists of a cascade of differential amplifiers. Maybe some even think it consists of just a high speed OpAmp…

Differential amplifiers are still routine and the highest performance digitizers have differential inputs.  Usually the first stage after the low impedance attenuators converts from single ended to differential, and this stage is where the combined position and offset signal is conveniently introduced.

The various modern PGAs used in oscilloscopes are differential, so they follow the same pattern, but since they replace the low impedance attenuators, position and offset are added afterwards.  DSOs with a separate offset control will add it before the PGA.  Old designs which do this have to somehow add the offset before some of the attenuation stages, which means moving some of the attenuators to the differential part of the signal chain, which is relatively expensive.
It doesn’t make much sense to get philosophical about obsolete designs. We are talking about general purpose DSOs here, which range from entry level (low end) up to the midrange, but exclude high end gear, which is specialized and definitely not general purpose. At some point, at least after the invention of the digital readout, the T&M industry noticed that a minimum of DC accuracy and stability was expected. Users were no longer willing to permanently turn the offset control of their scopes just to center the trace, as they used to do with their ancient CROs, but expected a decently stable offset position and some accuracy. So the split path design has long since become universal for all general purpose DSOs – despite its drawbacks, the most obvious being the overload recovery issue. And this is unavoidable, even with a good design.

Of course we find the cascaded differential stages in almost every HF IC, and in HF instruments like spectrum analyzers it might well be the only amplifier architecture required, but split path has become common in wideband general purpose oscilloscopes since they are supposed to work from DC up to the specified bandwidth.

Btw, there are folks who have managed to build a balanced version of the split path input buffer, so you can have this with balanced inputs too.


Quote
If you actually think the LF noise in a split path design would be reduced, you’re forgetting that the LF path has to be attenuated quite a bit (usually up to 10 times) in order to get the desired input protection and a decent offset compensation range. This has to be compensated for by a corresponding gain in the OpAmp. Together with the high source impedance of the divider (which has to have a total resistance of 1 meg) this can raise the noise floor by more than 20 dB below the crossover frequency.

That is a good point that I had forgotten, but the noise can still be lower even in old designs.

Old designs which have two separate x10 high impedance attenuators limit the input range to the buffer to 1/10th the level of new DSOs, so attenuation on the DC path is also lower.  The Tektronix 22xx series only attenuates by 1.33.

Luckily for the discussion here, low frequency noise is irrelevant because wideband noise at 20 MHz and higher bandwidths dominates.
It’s not “old designs” that utilize two input attenuators. Of course you cannot build a good scope with vertical gain settings from 500 µV/div up to 10 V/div with just one single attenuator. For instance, every contemporary Siglent DSO has two input attenuator stages. Offset compensation voltage has to be added to the input in order to be effective (otherwise the input stage would require a totally unrealistic high common mode range), so this is part of the LF path of a split path input buffer design and topologically sits between the attenuators and the PGA.

With low attenuation factors you either need high supply rails (old design) or you get only a very low offset compensation range. But does a Tek 22xx even have a split path design? The specifications of up to one division trace shift for variable gain and trace invert make me wonder. All the more so as the best sensitivity is not particularly high at 2 mV/div. Or maybe they use the cheapest FET-OpAmp with high Offset voltage and -drift without self-calibration in the LF path – but this would somehow scotch the whole idea of the split path approach?

Above some 100 kHz the situation eases a lot, and at 10 MHz and above we get noise densities in the realm of 2 – 3.5 nV/sqrt(Hz) with proper designs, at least from Rohde & Schwarz, LeCroy and Siglent.


Quote
So there is no way around the sad fact that the usual general purpose DSO isn't well suited for precision work at low frequencies, because of the steeply rising noise floor down there.

I agree, but if you include older instruments, then some general purpose DSOs are much better than others at low and/or high frequencies.  I have not tested enough modern low end DSOs to know if they all have subpar noise performance.  Even with older instruments, though, I gave up on good low noise performance a long time ago, with the exception of anything with the Tektronix 5A22/7A22/AM502.

At low frequencies it is relatively easy to make a low noise amplifier, but since oscilloscopes lack the noise marker function for their FFT, I would like to have a low noise dynamic signal analyzer instead.
I do not know what you mean by “low end” DSOs. We are talking about serious instruments here, so low end would be the entry level class. But the problem is not limited to these – all contemporary scopes up to the upper midrange have the very same problem: rising noise at very low frequencies because of the special conditions in a split path input buffer design.

If someone needs a superb instrument for low frequencies, then a Picoscope 4262 is one of the few options – apart from a DSA, that is. The 4262 only has 5 MHz bandwidth, but it is true 16 bits, has an SFDR of >96 dB and a near constant noise density from DC to its upper bandwidth limit.


It is either HiRes or ERES - I'm not quite sure - but in any case it is a true acquisition mode, in the sense of real-time pre-processing. The sample memory gets halved in this mode, because it is expanded to 16 bits of width, as the captured raw data now consists of 10-bit samples. All the post-processing, measurements and math now use the 10-bit data. The firmware cannot tell the difference between this resolution enhancement (implemented in the FPGA) and a true 10-bit ADC.

Old Tektronix DSOs used 16-bit acquisition and processing memory so high resolution mode did not halve the record length.
I was talking about a Siglent SDS2000X Plus, which provides 200 Mpts memory per channel pair, hence there is some headroom for this. How long was the memory in said old Tektronix DSOs?

« Last Edit: December 27, 2021, 01:16:41 am by Performa01 »
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17037
  • Country: us
  • DavidH
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #112 on: December 27, 2021, 03:36:58 am »
It doesn’t make much sense to get philosophical about obsolete designs. We are talking about general purpose DSOs here, which range from entry level (low end) up to the midrange, but exclude high end gear, which is specialized and definitely not general purpose.

The point is that the designs have not changed much.  The PGA has replaced the low impedance switched attenuator, but engineers then and now are solving the same problems.  Early single ended input digitizers were replaced with differential input digitizers, and differential signal paths were ultimately kept.  Everything is more integrated now of course.

Quote
At some point, at least after the invention of the digital readout, the T&M industry noticed that a minimum of DC accuracy and stability was expected. Users were no longer willing to permanently turn the offset control of their scopes just to center the trace, as they used to do with their ancient CROs, but expected a decently stable offset position and some accuracy. So the split path design has long since become universal for all general purpose DSOs – despite its drawbacks, the most obvious being the overload recovery issue. And this is unavoidable, even with a good design.

The move toward the split-path design was not driven by performance; it was about cost.  It happened as soon as low cost monolithic low input current operational amplifiers became available.  The cost savings came from replacing the discrete dual matched JFET with a single unselected JFET even though the split-path design requires trimming of the compensation or gain or both.

Quote
Btw, there are folks who have managed to build a balanced version of the split-path input buffer, so you can have this with balanced inputs too.

Haha, I am one of those folks, but it was much lower noise, impedance, and bandwidth for low level DC differential amplification.  I extended and improved an existing single ended design to fully differential and it worked perfectly on the first try, which pleasantly surprised me.

Quote
It’s not “old designs” that utilize two input attenuators. Of course you cannot build a good scope with vertical gain settings from 500 µV/div up to 10 V/div with just one single attenuator. For instance, every contemporary Siglent DSO has two input attenuator stages. Offset compensation voltage has to be added to the input in order to be effective (otherwise the input stage would require a totally unrealistic high common mode range), so this is part of the LF path of a split path input buffer design and topologically sits between the attenuators and the PGA.

Modern "budget" DSOs use only one input attenuator, which places much greater demands on the input buffer to handle larger signal levels.  The mid-tier models I have considered still use two input attenuators.  The presence of two input attenuators might be a good way to divide the lowest end budget DSOs from the next level up in performance.

Quote
With low attenuation factors you either need high supply rails (old design) or you get only a very low offset compensation range. But does a Tek 22xx even have a split path design? The specifications of up to one division trace shift for variable gain and trace invert make me wonder. All the more so as the best sensitivity is not particularly high at 2 mV/div. Or maybe they use the cheapest FET-OpAmp with high Offset voltage and -drift without self-calibration in the LF path – but this would somehow scotch the whole idea of the split path approach?

The Tektronix 22xx series does as shown below, and it might have been the first split-path design from them, but not all stages have balance adjustments in the 22xx series.  It is split-path but DC coupled and the operational amplifier controls the source current of the JFET to produce the DC and low frequency output.  Steve Roach discussed DC and AC coupled split-path designs in his article about oscilloscope signal conditioning.

The offset null is used for balance which is a terrible idea for precision, but probably good enough for an oscilloscope.  That might explain why one channel of one of mine has noticeable warmup drift.

When I studied the design in detail years ago with an eye toward noise analysis, I got the feeling that the Tektronix engineers paid attention to proper distribution of noise and gain.

Sensitivity was limited to 2 mV/div simply because greater sensitivity would require another preamplifier stage and noise was already greater than trace width, which seems funny now that modern oscilloscopes put up with even more noise.  It is not shown below but the basic sensitivity is 5 mV/div.  2 mV/div relies on increasing gain by 2.5 times in the preamplifier instead of removing attenuation which was pretty common at the time but has disadvantages.

The more modern AC coupled split-path amplifier allows AC and DC coupling to be implemented with the low frequency path instead of a high voltage RF relay which is a major advantage.

Quote
It is either HiRes or ERES - I'm not quite sure - but in any case it is a true acquisition mode, in the sense of real-time pre-processing. The sample memory gets halved in this mode, because it is expanded to 16 bits of width, as the captured raw data now consists of 10-bit samples. All the post-processing, measurements and math now use the 10-bit data. The firmware cannot tell the difference between this resolution enhancement (implemented in the FPGA) and a true 10-bit ADC.

Old Tektronix DSOs used 16-bit acquisition and processing memory so high resolution mode did not halve the record length.

I was talking about a Siglent SDS2000X Plus, which provides 200 Mpts memory per channel pair, hence there is some headroom for this. How long was the memory in said old Tektronix DSOs?

The maximum record length on those old DSOs is tiny by modern standards at only 4k, but even though fast RAM was expensive in both cost and size, they still made it twice as wide as needed.  Processing in the modern way would have doubled the record length without increasing the amount of installed memory.  Tektronix would later advertise this as a "no compromise" feature.
 
The following users thanked this post: 2N3055

Offline Performa01

  • Super Contributor
  • ***
  • Posts: 1701
  • Country: at
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #113 on: December 27, 2021, 11:31:39 am »
The move toward the split-path design was not driven by performance; it was about cost.  It happened as soon as low cost monolithic low input current operational amplifiers became available.  The cost savings came from replacing the discrete dual matched JFET with a single unselected JFET even though the split-path design requires trimming of the compensation or gain or both.
Well, of course cost might have been a major consideration, even though I cannot see why back then a dual matched FET should have been more expensive than an IC that contains basically the same plus a bunch of additional transistors and other components. Today it’s a different story of course, because these are hard to get and expensive spare parts now, but back in the seventies a dual FET was about as affordable (or rather expensive) as a JFET OpAmp (like LF356) as far as I remember.

The discrete differential stages usually did require trimming of the “offset balance”, as far as I remember the old circuit diagrams of up to 300 MHz frontends that did not use a split path topology.
Even though your circuit diagram shows three trimmers, I don’t think we’ve seen this in recent designs. Self calibration takes care of the offset error and with modern low tolerance parts in the input and feedback networks the balance between both paths and the transition at the crossover frequency are good enough even without adjustments.

Quote
Btw, there are folks who have managed to build a balanced version of the split-path input buffer, so you can have this with balanced inputs too.

Haha, I am one of those folks, but it was much lower noise, impedance, and bandwidth for low level DC differential amplification.  I extended and improved an existing single ended design to fully differential and it worked perfectly on the first try, which pleasantly surprised me.
Congrats – my hat goes off to you! This was (and still is) true design work, not very common anymore…

The Tektronix 22xx series does as shown below, and it might have been the first split-path design from them, but not all stages have balance adjustments in the 22xx series.  It is split-path but DC coupled and the operational amplifier controls the source current of the JFET to produce the DC and low frequency output.  Steve Roach discussed DC and AC coupled split-path designs in his article about oscilloscope signal conditioning.

The offset null is used for balance which is a terrible idea for precision, but probably good enough for an oscilloscope.  That might explain why one channel of one of mine has noticeable warmup drift.

When I studied the design in detail years ago with an eye toward noise analysis, I got the feeling that the Tektronix engineers paid attention to proper distribution of noise and gain.

Sensitivity was limited to 2 mV/div simply because greater sensitivity would require another preamplifier stage and noise was already greater than trace width, which seems funny now that modern oscilloscopes put up with even more noise.  It is not shown below but the basic sensitivity is 5 mV/div.  2 mV/div relies on increasing gain by 2.5 times in the preamplifier instead of removing attenuation which was pretty common at the time but has disadvantages.

The more modern AC coupled split-path amplifier allows AC and DC coupling to be implemented with the low frequency path instead of a high voltage RF relay which is a major advantage.
Thanks for the excerpt from the circuit diagram. It is quite interesting.

Yes, I’ve immediately noticed that it’s only DC coupled, which means a number of drawbacks, particularly the fact that the input goes open circuit in AC coupled mode, whereas good designs are supposed to have a constant input impedance regardless of the input coupling, or any other settings for that matter.

The LF-path also doesn’t provide the offset control usually found in DSOs – just because it really is best placed here. But yes, with the low division ratio of the LF input network, the compensation range could not be huge anyway. Nevertheless I have to assume that the offset adjustment is done at a later stage, which means that it actually relies on the usable common mode range of the input buffer – which will of course work to a certain degree because of the relatively high rail voltages of +/- 8.6 V.

A maximum sensitivity of 2 mV/div means 16 mVpp full scale. Even 5 mV/div is equivalent to 40 mVpp FS. Since this is hardly enough to drive the plates of a CRT, there has to be a lot of amplification after the programmable attenuator. In a DSO, the ADC would require at the very least several hundred millivolts (but usually up to two volts) full scale for proper operation. This is why integrated PGAs do not only provide attenuation, but amplification as well. Consequently, as the signal needs to be amplified anyway, there’s no need to stop at 2 mV/div. With 20 MHz bandwidth limit the total noise in a proper low noise design can be as low as 20 µVrms, so this should not be a problem for the trace width.
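For a sanity check of those numbers (a flat density and a brick-wall 20 MHz limit are assumed):

Code: (Python)
import math

vn_total = 20e-6                 # 20 uVrms total noise claimed
bw = 20e6                        # 20 MHz bandwidth limiter
print(vn_total / math.sqrt(bw))  # ~4.5e-9, i.e. ~4.5 nV/rtHz average density

# At 2 mV/div, 20 uVrms is a hairline:
print(vn_total / 2e-3)           # 0.01 -> one percent of a division, rms

So the trace-width argument indeed evaporates with a proper low noise design.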

 
The following users thanked this post: 2N3055

Offline G0HZU

  • Super Contributor
  • ***
  • Posts: 3152
  • Country: gb
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #114 on: December 27, 2021, 02:40:05 pm »
Quote
If someone needs a superb instrument for low frequencies, then a Picoscope 4262 is one of the few options – apart from a DSA, that is. The 4262 only has 5 MHz bandwidth, but it is true 16 bits, has an SFDR of >96 dB and a near constant noise density from DC to its upper bandwidth limit.
Yes, I've seen these and there are also some alternatives. Very tempting. At the moment I sometimes use a Tek RSA3408A 8.5GHz RTSA for looking at low frequency stuff. This has a low noise floor and it has the advantage (for me at least) of having a 50 ohm input impedance. The Picoscope should be a bit better although it is limited to a 5MHz BW. The Tek analyser can capture 40MHz but it is only a 14bit system.

 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17037
  • Country: us
  • DavidH
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #115 on: December 27, 2021, 03:27:38 pm »
The move toward the split-path design was not driven by performance; it was about cost.  It happened as soon as low cost monolithic low input current operational amplifiers became available.  The cost savings came from replacing the discrete dual matched JFET with a single unselected JFET even though the split-path design requires trimming of the compensation or gain or both.

Well, of course cost might have been a major consideration, even though I cannot see why back then a dual matched FET should have been more expensive than an IC that contains basically the same plus a bunch of additional transistors and other components. Today it’s a different story of course, because these are hard to get and expensive spare parts now, but back in the seventies a dual FET was about as affordable (or rather expensive) as a JFET OpAmp (like LF356) as far as I remember.

Monolithic chips do not require hand grading for precision.  The dual matched parts were graded by hand.  Note that monolithic dual transistors will not work in this application because of parasitic coupling.

Tektronix kept the simpler dual stacked JFET buffer in the trigger circuits where precision was less important.

Quote
The discrete differential stages usually did require trimming of the “offset balance”, as far as I remember the old circuit diagrams of up to 300 MHz frontends that did not use a split path topology.
Even though your circuit diagram shows three trimmers, I don’t think we’ve seen this in recent designs. Self calibration takes care of the offset error and with modern low tolerance parts in the input and feedback networks the balance between both paths and the transition at the crossover frequency are good enough even without adjustments.

The designs Steve Roach shows (attached below) include automated trimming of the gain of the low frequency path.  He briefly mentions noise on page 70 where he discusses the shortcomings of RF MOSFETs.

Quote
Quote
Btw, there are folks who have managed to build a balanced version of the split-path input buffer, so you can have this with balanced inputs too.

Haha, I am one of those folks, but it was much lower noise, impedance, and bandwidth for low level DC differential amplification.  I extended and improved an existing single ended design to fully differential and it worked perfectly on the first try, which pleasantly surprised me.

Congrats – my hat goes off to you! This was (and still is) true design work, not very common anymore…

The part of it that I really liked was adjusting the frequency breakpoint between the fast and slow path for lowest noise using a sampling DC voltmeter.  Low noise was my primary design goal.  Then I went back and measured the frequency of the breakpoint and it was exactly where the noise curves of the slow and fast path crossed, right where it should be.

Quote
Yes, I’ve immediately noticed that it’s only DC coupled, which means a number of drawbacks, particularly the fact that the input goes open circuit in AC coupled mode, whereas good designs are supposed to have a constant input impedance regardless of the input coupling, or any other settings for that matter.

I do not know that one way is better than the other and oscilloscopes did it that way for decades without problems except where a DC return path was required.  AC coupled designs have to sink the gate current somehow which presents its own complications.  The reverse engineered Rigol DS1000Z front end that Dave made shows that the input resistance changes when coupling is switched, which has got to be incorrect, but maybe someone could measure it.  The big advantage of the AC coupled split-path buffer is that coupling can be switched on the low frequency side with a solid state switch.

Quote
The LF-path also doesn’t provide the offset control usually found in DSOs – just because it really is best placed here. But yes, with the low division ratio of the LF input network, the compensation range could not be huge anyway. Nevertheless I have to assume that the offset adjustment is done at a later stage, which means that it actually relies on the usable common mode range of the input buffer – which will of course work to a certain degree because of the relatively high rail voltages of +/- 8.6 V.

The stage following the low impedance attenuator does single ended to differential conversion and that is where offset and position are inserted.  Since gain is fixed after that point, the scaling of the position control is fixed, but it was still also intended to operate as a limited range offset control.

Adjusting offset at the input buffer in this case would alter the transconductance changing the gain and frequency response, but maybe not enough to matter?  Later gain stages include first order correction of bandwidth and gain over temperature.

Quote
A maximum sensitivity of 2 mV/div means 16 mVpp full scale. Even 5 mV/div is equivalent to 40 mVpp FS. Since this is hardly enough to drive the plates of a CRT, there has to be a lot of amplification after the programmable attenuator. In a DSO, the ADC would require at the very least several hundred millivolts (but usually up to two volts) full scale for proper operation. This is why integrated PGAs do not only provide attenuation, but amplification as well. Consequently, as the signal needs to be amplified anyway, there’s no need to stop at 2 mV/div. With 20 MHz bandwidth limit the total noise in a proper low noise design can be as low as 20 µVrms, so this should not be a problem for the trace width.

The worst case input signal range at 50 mV/div, where low impedance attenuation is maximum, is +/- 250 millivolts with overrange.  The peak-to-peak noise is only apparent in digital storage mode.  At the maximum sensitivity of 2 mV/div, the input noise only just dominates the noise of the following stages.
« Last Edit: December 27, 2021, 03:30:25 pm by David Hess »
 
The following users thanked this post: egonotto

Offline Performa01

  • Super Contributor
  • ***
  • Posts: 1701
  • Country: at
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #116 on: December 27, 2021, 05:13:18 pm »
Quote
If someone needs a superb instrument for low frequencies, then a Picoscope 4262 is one of the few options – apart from a DSA, that is. The 4262 only has 5 MHz bandwidth, but it is true 16 bits, has an SFDR of >96 dB and a near constant noise density from DC to its upper bandwidth limit.
Yes, I've seen these and there are also some alternatives. Very tempting. At the moment I sometimes use a Tek RSA3408A 8.5GHz RTSA for looking at low frequency stuff. This has a low noise floor and it has the advantage (for me at least) of having a 50 ohm input impedance. The Picoscope should be a bit better although it is limited to a 5MHz BW. The Tek analyser can capture 40MHz but it is only a 14bit system.
I just had a closer look - and sadly my previous statement about near constant noise density isn't true. Even though it clearly is not a split path design, and the 1/f corner frequency is significantly lower than for the 500 MHz and 2 GHz scopes that I have here, there is still some significant 1/f noise, slowly starting below some 25 kHz. Well, that's obviously the drawback of a 1 Mohm input impedance, requiring a FET input...

Unlike with the general purpose scopes, on this instrument there is a major difference between open circuit and 50 ohm termination. Without termination, the noise rises significantly.

The noise density stays below 7 nV/sqrt(Hz) at and above 20 kHz, but gets as high as 102 nV/sqrt(Hz) down at 100 Hz. The first two attached screenshots show the noise spectrum at full sample rate up to 100 kHz and at full bandwidth. The noise density is generally higher than in the general purpose scopes (where it is in the range 2-3.5 nV/sqrt(Hz) at and above 1MHz), which might have to do with the higher sensitivity of these scopes. The Picoscope 4262 is limited to 20 mVpp full scale as the most sensitive range.
EDIT: Caution! This is for AC coupling with incomplete termination, which results in bad LF performance.

Pico_4262_Noise_50_5M_D100k
Pico_4262_Noise_50_5M

Next comes the noise density graph:
EDIT: Caution! This is for AC coupling with incomplete termination, which results in bad LF performance.

Pico_4262_ND_50_5M

A distortion test at 20 kHz

Signal_1V_20kHz

And finally a two tone intermodulation test, demonstrating the SFDR (just look at the cursor measurement; the automatic measurement failed because it obviously isn't intelligent enough to operate on the whole trace):

Signal_IMD_40mV_20-21kHz

EDIT: The noise measurements shown so far did not show the true performance, because they were flawed for two reasons:

1.   The input was AC coupled by accident, which of course increases LF-noise significantly.
2.   The input had a 50 ohm through terminator fitted, but since this scope is sensitive to the source impedance, an additional 50 ohm end terminator should be used to complete the 50 ohms setup.

So I've added the correct measurement results for spectral noise and noise density:

Pico4262_Noise_25_5MHz_D50kHz
Pico_4262_ND_25_5M



« Last Edit: December 28, 2021, 09:49:04 am by Performa01 »
 
The following users thanked this post: egonotto, 2N3055

Offline mawyatt

  • Super Contributor
  • ***
  • Posts: 3714
  • Country: us
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #117 on: December 27, 2021, 05:34:53 pm »
The two tone IMD looks good as one would expect from a "True" 16 bit system. If you don't mind could you do this test at ~1MHz with the Picoscope 4262?

BTW, one of the reasons almost everything analogish in complex chips is differential is that you can't get a good ground reference on-chip for larger chips. Later, when analog-grade flip-chip ball bonding became available, the on-chip ground reference was better than with traditional wire bonds, since the ball bonds could be located within the chip boundaries as required and thus offered a lower ground impedance.

Best,
 

Offline G0HZU

  • Super Contributor
  • ***
  • Posts: 3152
  • Country: gb
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #118 on: December 27, 2021, 06:08:44 pm »
Quote
I just had a closer look - and sadly my previous statement about near constant noise density isn't true. Even though it clearly is not a split path design, and the 1/f corner frequency is significantly lower than for the 500 MHz and 2 GHz scopes that I have here, there is still some significant 1/f noise, slowly starting below some 25 kHz. Well, that's obviously the drawback of a 1 Mohm input impedance, requiring a FET input...

Thanks. The Tek RSA3408A RTSA can be very laggy and frustrating to use at times, but it is very powerful. The front end is 50 ohms and the noise figure at low frequencies is about 20 dB. I've not looked to see how noisy it is below 1 kHz, but it's bound to get a bit noisier there.
« Last Edit: December 27, 2021, 06:11:02 pm by G0HZU »
 

Offline Performa01

  • Super Contributor
  • ***
  • Posts: 1701
  • Country: at
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #119 on: December 27, 2021, 06:19:21 pm »
Monolithic chips do not require hand grading for precision.  The dual matched parts were graded by hand.  Note that monolithic dual transistors will not work in this application because of parasitic coupling.
Thanks for the explanation – this makes sense of course.

The designs Steve Roach shows (attached below) include automated trimming of the gain of the low frequency path.  He briefly mentions noise on page 70 where he discusses the shortcomings of RF MOSFETs.
Yes, this well known article is brilliant indeed! Yet we can see its age by looking at the first schematic, figure 7-1: the 50 ohm termination is accomplished by just a resistor connected in parallel with the ordinary high impedance input and its high shunt capacitance – a solution that is barely suitable for scopes with a bandwidth exceeding some 100 MHz.

I do not know that one way is better than the other and oscilloscopes did it that way for decades without problems except where a DC return path was required.  AC coupled designs have to sink the gate current somehow which presents its own complications.  The reverse engineered Rigol DS1000Z front end that Dave made shows that the input resistance changes when coupling is switched, which has got to be incorrect, but maybe someone could measure it.  The big advantage of the AC coupled split-path buffer is that coupling can be switched on the low frequency side with a solid state switch.
It’s been quite some time, but I think I remember that this reverse engineered Rigol schematic has a number of errors in it. Some are more obvious than others. It is a nice means to get an overview, but certainly not suitable to study any circuit details.

Well, just because the AC block sat directly in the input path for a long time, especially when scope bandwidths were rather low, does not mean it is a good thing to have to be prepared for unexpected changes in major characteristics when operating a switch that basically just alters the frequency response.

Consider a high impedance (100 Mohm), x100 high voltage probe connected to 2kV. If you now switch to AC coupling by accident, the input DC-block capacitor will charge up. Current is limited by the probe resistance, but after 10 seconds the capacitor might be charged to about 1.9 kV and this is equivalent to some 60 mJ of energy. So if the (supposedly) 400 volts rated capacitor doesn’t break down (and suffers damage or at least permanent degradation), it will send a potentially destructive pulse of electric energy into the frontend as soon as someone connects a low impedance source to the input after that incident.
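Those numbers are self-consistent if you assume a DC-block capacitor of roughly 33 nF (the capacitor value is my assumption; it is not stated above):

Code: (Python)
import math

r_probe = 100e6    # 100 Mohm x100 HV probe resistance
c_block = 33e-9    # assumed AC-coupling DC-block capacitor
v_src = 2000.0     # 2 kV on the probe tip

tau = r_probe * c_block                    # ~3.3 s time constant
v_10s = v_src * (1 - math.exp(-10 / tau))  # ~1.9 kV after 10 seconds
energy = 0.5 * c_block * v_10s ** 2        # ~0.06 J stored in the capacitor
print(tau, v_10s, energy)

About 60 mJ at nearly 1.9 kV, ready to dump into whatever low impedance source touches the input next.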

Adjusting offset at the input buffer in this case would alter the transconductance changing the gain and frequency response, but maybe not enough to matter?  Later gain stages include first order correction of bandwidth and gain over temperature.
Figure 7-3 in your document shows the usual approach where to feed V_offset. In your circuit diagram of the Tek 22xx the offset voltage (delivered from an OpAmp with close to zero output impedance within the LF frequency range) would have to be fed into the lower leg of R98 (after disconnecting it from ground, that is). But with the low division ratio, which clearly is an attempt to keep the LF noise down, there is almost nothing gained, so I can completely understand why it’s done differently in this particular case.

EDIT: Sorry, I've only now checked what you mean. Of course, with the transistor output stage the original approach for offset compensation cannot be used.
« Last Edit: December 27, 2021, 07:17:08 pm by Performa01 »
 

Offline Performa01

  • Super Contributor
  • ***
  • Posts: 1701
  • Country: at
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #120 on: December 27, 2021, 06:41:52 pm »
The two tone IMD looks good as one would expect from a "True" 16 bit system. If you don't mind could you do this test at ~1MHz with the Picoscope 4262?
Here you go - this instrument is not sensitive enough - you can barely see the IM3 products at 990 and 1020 kHz.

Signal_IMD_40mV_1000-1010kHz

 
The following users thanked this post: mawyatt

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17037
  • Country: us
  • DavidH
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #121 on: December 27, 2021, 08:17:11 pm »
The designs Steve Roach shows (attached below) include automated trimming of the gain of the low frequency path.  He briefly mentions noise on page 70 where he discusses the shortcomings of RF MOSFETs.

Yes, this well known article is brilliant indeed! Yet we can see its age by looking at the first schematic, figure 7-1: the 50 ohm termination is accomplished by just a resistor connected in parallel with the ordinary high impedance input and its high shunt capacitance – a solution that is barely suitable for scopes with a bandwidth exceeding some 100 MHz.

It works well with my 150 MHz 2445 and 300 MHz 2440 but they use hybrid construction so the parasitic elements are much less than with a surface mount printed circuit board.  I think the later TDS series did it up to 500 MHz but maybe not because the 1 GHz models obviously could not have.  As mentioned earlier, the old Tektronix 485 with printed board construction did *not* use a switchable termination but instead an RF relay to direct the input to either the high impedance buffer or a separate 50 ohm input, and the specifications reflect it with lower bandwidth in high impedance mode.  At the time I do not think they had a faster JFET high impedance buffer or they would have used it.  I consider the 485 to be a "heroic" engineering effort.

Based on context, I think Steve Roach worked on the 500 MHz and 1 GHz TDS series of oscilloscopes so his article gives an idea about what was going on in the late 1990s and early 2000s.  I believe this makes it particularly useful for emulation in modern amateur designs.

Quote
Consider a high impedance (100 Mohm), x100 high voltage probe connected to 2kV. If you now switch to AC coupling by accident, the input DC-block capacitor will charge up. Current is limited by the probe resistance, but after 10 seconds the capacitor might be charged to about 1.9 kV and this is equivalent to some 60 mJ of energy. So if the (supposedly) 400 volts rated capacitor doesn’t break down (and suffers damage or at least permanent degradation), it will send a potentially destructive pulse of electric energy into the frontend as soon as someone connects a low impedance source to the input after that incident.

Tektronix made high voltage 10x and 100x probes with a built in parallel resistance to avoid that problem.  They can be identified by having a lower than expected input resistance.  Probes like this are still made but they are difficult to find and come with a premium price.

Quote
Figure 7-3 in your document shows the usual approach where to feed V_offset. In your circuit diagram of the Tek 22xx the offset voltage (delivered from an OpAmp with close to zero output impedance within the LF frequency range) would have to be fed into the lower leg of R98 (after disconnecting it from ground, that is). But with the low division ratio, which clearly is an attempt to keep the LF noise down, there is almost nothing gained, so I can completely understand why it’s done differently in this particular case.

EDIT: Sorry, I've only now checked what you mean. Of course, with the transistor output stage the original approach for offset compensation cannot be used.

The 22xx series also did not need a different method because it is based on a traditional analog design where that was a solved problem.  It just represents the last fully documented oscilloscope design along with the 24xx series of analog and digital storage models.

Even so, I consider the 2232 to be the first "modern" DSO design with a recognizable user interface.  Its predecessor, the 2230, has an archaic albeit interesting user interface and really bridges the gap between analog and digital designs.

I do not recommend duplicating the 22xx design, but a lot can still be learned from it.

I wish we had better data on available RF MOSFET noise characteristics.  What is available is intended for RF amplifier applications.
« Last Edit: December 27, 2021, 08:20:27 pm by David Hess »
 

Offline FiorenzoTopic starter

  • Regular Contributor
  • *
  • Posts: 110
  • Country: it
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #122 on: December 27, 2021, 08:28:48 pm »
I would like to thank everybody for the many replies.
They have been very helpful and educational, and have convinced me that for my work an oscilloscope with a low noise front end would be better than one with a very fast ADC like the Rigol.
I am receiving an SDS2104X Plus in the next two days, so I will do a limited comparison with the Rigol MSO5000 that I still have.

Thank you again.
 

Offline G0HZU

  • Super Contributor
  • ***
  • Posts: 3152
  • Country: gb
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #123 on: December 27, 2021, 08:42:30 pm »
Quote
I wish we had better data on available RF MOSFET noise characteristics.  What is available is intended for RF amplifier applications.
Can you measure the noise parameters yourself at audio frequencies? I've done this stuff up at RF and recently measured the s-parameters for the BF998 MOSFET at various bias points across a frequency range of a few MHz up to 3GHz and I also created some noise data for it up at VHF. This noise data gets included in the s-parameter file. I did the same for the old BF981 a few years back with good results when designing amplifiers for low noise figure. I've never tried to do this at audio frequencies though.
 

Offline G0HZU

  • Super Contributor
  • ***
  • Posts: 3152
  • Country: gb
Re: How much noise floor and other things matter in oscilloscope usability
« Reply #124 on: December 27, 2021, 08:46:35 pm »
I would like to thank everybody for the many replies.
They have been very helpful and educational, and have convinced me that for my work an oscilloscope with a low noise front end would be better than one with a very fast ADC like the Rigol.
I am receiving an SDS2104X Plus in the next two days, so I will do a limited comparison with the Rigol MSO5000 that I still have.

Thank you again.
Sounds good! My first digital scope (a Tektronix) was noisy and it spoiled the experience a bit. It is still possible to do good work with a noisy scope, but I don't think I'd want to buy another one. Especially if it is as noisy as that Rigol.
 

