Author Topic: How often do we need to have the oscilloscope and function generator calibrated?  (Read 12759 times)


Offline eeguy (Topic starter)

  • Regular Contributor
  • *
  • Posts: 181
  • Country: us
Quote
In the case of the GDS-2204E you can do the adjustment yourself using SPC (signal path compensation) and self-calibration. I think some calibration companies also calibrate (check) GW Instek scopes. If the calibration fails (out of specification) the scope is likely broken, but fortunately the GDS2000 series is covered by a limited lifetime warranty for the first owner; the warranty ends 5 years after production of the model stops.

That sounds good!  Does anybody know if Keysight's 2000X and 3000XT oscilloscopes have such a function?
 

Online T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22384
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
I dare say "signal path compensation" is a bug disguised as a feature.  My Tek TDS460 has it, which, given its other drawbacks, should lend support to such a hypothesis...

That is to say, the thing being compensated is something that could have been solved by design, but for whatever reason, they didn't solve it.  The compensation should be done on a regular basis, say, every time the instrument is turned on, once it reaches normal operating temperature.  Or whatever the manual recommends.
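For instance, the warm-up self-cal can be scripted over VISA.  A minimal Python sketch, assuming pyvisa and an instrument that implements the optional IEEE 488.2 *CAL? query (many scopes use vendor-specific calibration commands instead, and the address below is hypothetical):

Code:
# Run a warm-up self-cal over VISA.  Assumes pyvisa and an instrument
# that implements the optional IEEE 488.2 *CAL? query; check the manual
# for the vendor-specific equivalent if *CAL? is not supported.
import time
import pyvisa

rm = pyvisa.ResourceManager()
scope = rm.open_resource("TCPIP0::192.168.1.50::INSTR")  # hypothetical address
scope.timeout = 120_000          # self-cal can take minutes (timeout in ms)

time.sleep(30 * 60)              # let the instrument reach operating temperature
result = scope.query("*CAL?").strip()
print("self-cal passed" if result == "0" else f"self-cal failed: {result}")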

Which brings up a relevant point.  This entire thread can basically be answered: Read The Freaking Manual!

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: tautech

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Quote
What I was trying to ask is whether cheaper oscilloscopes (e.g. the DS1054Z and GDS-2204E) have a higher chance of requiring re-calibration, due to the use of cheaper components, than the Keysight 2000X/3000XT ones.

No, they don't.  The price tag of the components isn't relevant.  Pretty much all reasonably modern scopes are highly integrated designs which have been shown to perform very well in terms of long-term spec compliance.  There simply isn't much that shifts working parameters with age enough for the scope to deviate from its original specs, and for the components that do, most scopes come with a built-in self-calibration which uses an internal reference to establish signal path correction factors.
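To make "correction factors" concrete, here is a rough sketch of a two-point self-cal, assuming internal references at 0 V and 1.000 V (the reference value and ADC codes are illustrative, not from any particular scope):

Code:
# Two-point self-cal sketch: relate raw ADC codes measured at internal
# references (assumed here to be 0 V and 1.000 V) to a gain and offset
# correction applied to every later sample.  All numbers are illustrative.
def derive_correction(code_at_zero, code_at_ref, v_ref=1.000):
    gain = v_ref / (code_at_ref - code_at_zero)   # volts per ADC code
    offset = -gain * code_at_zero                 # volts at ADC code 0
    return gain, offset

def correct(raw_code, gain, offset):
    return gain * raw_code + offset

gain, offset = derive_correction(code_at_zero=2055, code_at_ref=3280)
print(correct(2600, gain, offset))                # corrected reading in volts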

These days, if you send a scope in for calibration, and it fails, it usually means that the scope is defective and needs repair.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17117
  • Country: us
  • DavidH
Quote
I dare say "signal path compensation" is a bug disguised as a feature.  My Tek TDS460 has it, which, given its other drawbacks, should lend support to such a hypothesis...

That is to say, the thing being compensated is something that could have been solved by design, but for whatever reason, they didn't solve it.  The compensation should be done on a regular basis, say, every time the instrument is turned on, once it reaches normal operating temperature.  Or whatever the manual recommends.

It is not an easy problem; look at how many balance adjustments were used in old analog designs, and those adjustments are not even for temperature compensation, which is fixed.  In extreme cases like the Tektronix 7A13, *three* of the balance adjustments are accessible to the user, although only one (variable balance) is usually needed.  As soon as it was feasible, microprocessor control was used to implement automatic calibration in oscilloscopes like the 2247A and the 2465 series.  With older oscilloscopes, you just lived with it and took it into account, so automatic calibration was a real improvement.

First you have offset voltage change from the high impedance input buffer.  Then the following transconductance (or gain) stage has its own offset voltage change, which is not reduced by the input buffer's voltage gain because there is none.  Since that stage has voltage gain, the offset voltage changes of the following stages matter less; however, any cascodes have beta mismatch, which contributes another offset error no matter where they are.  And if a paraphase amplifier is used for variable gain or inversion, there is yet another set of balance adjustments.
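In other words, the input-referred offset of a cascade is V_os1 + V_os2/A1 + V_os3/(A1*A2) + ..., so the stages ahead of any voltage gain dominate.  A toy calculation with made-up numbers:

Code:
# Input-referred offset of a cascade: each stage's offset divided by the
# total gain ahead of it.  Values are made up for illustration; note the
# unity-gain buffer does nothing to reduce the second stage's contribution.
stage_offsets_mV = [1.0, 2.0, 5.0]  # V_os of buffer, gm stage, output stage
stage_gains = [1.0, 10.0, 5.0]      # voltage gain of each stage

total_mV = 0.0
preceding_gain = 1.0
for v_os, gain in zip(stage_offsets_mV, stage_gains):
    total_mV += v_os / preceding_gain
    preceding_gain *= gain
print(f"input-referred offset: {total_mV:.2f} mV")  # 1.0 + 2.0/1 + 5.0/10 = 3.5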

Matched devices in integrated form would seem to solve all of these problems through close matching and temperature tracking; however, parasitic capacitance between closely integrated devices is detrimental to high frequency performance, so this may cause more problems than it solves.  Even when monolithic matched pairs became available, discrete or hybrid matched pairs had to be used, and this is still the case.  Further, thermal feedback from integrated output stages back to the input devices causes settling time problems.  Old designs often used discrete offset-voltage-matched pairs for transconductance inputs and beta-matched pairs for cascodes, and they *still* needed balance adjustments.

External DC balancing via a servo loop is also generally not an option, because if a stage overloads, the integration constant ends up completely destroying overload recovery.  Well designed input stages can get away with it if the input protection circuits clamp the input at a level that prevents overload, and sometimes overload clamping is included in later stages, making it feasible there as well.
 

Online T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22384
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Yabbut...

All of those problems are neatly solved by using an integrated front-end IC.  Back in the analog days, designers had to make do with transistors (at first discrete, later in hybrids and monolithics) that were just fast enough (fT ~ 1 GHz).  Nowadays, ICs contain transistors pushing fT ~ 60 GHz.  Offsets are matched away, while enough excess gain-bandwidth is available to use feedback to swamp anything else.  Then it's immediately on to the ADC, which does its job in lockstep fashion, so aside from INL (which is very complicated to calibrate out), everything is very exact.

I suspect the compensation refers to timing constraints between various parts of the system (for example, clock generation and trigger).  But I haven't seen it described anywhere, and it likely covers a wide range of things specific to each manufacturer's design.

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17117
  • Country: us
  • DavidH
Quote
All of those problems are neatly solved by using an integrated front-end IC.

I gave an example of why this will not work: features that take advantage of integration, like interdigitated structures and linearized cross-coupled quads, are not suitable at high frequencies because of parasitic coupling.  The best that can be done is a symmetrical layout, which will not help if there are multiple inputs or multiple outputs, because of thermal coupling between the two; hybrids and discrete circuits handle that better, and it is one reason why unloading the output of an operational amplifier increases its open loop gain and linearity.  This limits the benefits of transistor matching in integrated processes for this application.  The matching is good, usually better than hybrid or discrete matching, and free, but automatic calibration still improves performance.

Check out the datasheets for the various integrated oscilloscope amplifiers like the LMH6518; they can have output offset voltages up to about 5% of the full scale output, and the LMH6518 datasheet does not even specify a maximum output offset voltage drift.  One way or another, modern DSOs which use integrated amplifiers are doing automatic calibration whether the user knows it or not.
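A rough sketch of the kind of offset null such a DSO might run behind the scenes (the callables here are hypothetical stand-ins for the instrument firmware's own hooks):

Code:
# Offset-null sketch: ground the front end, average many samples at each
# gain setting, and store the result to subtract from later acquisitions.
import statistics

def null_offsets(set_gain, ground_input, acquire, gain_settings, n=4096):
    ground_input()                        # switch the input relay to ground
    corrections = {}
    for g in gain_settings:
        set_gain(g)
        samples = acquire(n)              # raw ADC codes with input grounded
        corrections[g] = statistics.fmean(samples)
    return corrections                    # subtract corrections[g] per sample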

And the errors from the remaining non-integrated front end circuits still exist.

Quote
I suspect the compensation refers to timing constraints between various parts of the system (for example, clock generation and trigger).  But I haven't seen it described anywhere, and it likely covers a wide range of things specific to each manufacturer's design.

Why would there be any constraints between clock generation and triggering?  Digital triggering only cares that the sample clock is clean (and that the signal is free of aliasing), and analog triggering either does or does not use jitter correction, depending on the sophistication of the design.
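For reference, the textbook form of a digital trigger, with linear interpolation for the sub-sample crossing time (a sketch, not any vendor's actual algorithm):

Code:
# Textbook digital trigger: find the first rising crossing of the trigger
# level in the sampled record, then linearly interpolate the sub-sample
# crossing time.
def find_trigger(samples, level):
    for i in range(1, len(samples)):
        a, b = samples[i - 1], samples[i]
        if a < level <= b:                 # rising edge through the level
            frac = (level - a) / (b - a)   # sub-sample position, 0..1
            return (i - 1) + frac          # trigger time in sample periods
    return None

print(find_trigger([0.0, 0.2, 0.6, 1.0], level=0.5))  # 1.75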
 

Online T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22384
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Quote
Why would there be any constraints between clock generation and triggering?  Digital triggering only cares that the sample clock is clean (and that the signal is free of aliasing), and analog triggering either does or does not use jitter correction, depending on the sophistication of the design.

Drift in FR-4, I suppose.  It's probably not as big a deal now that real-time sampling is pervasive, but that kind of thing would matter much more to a 100 MS/s scope when the big fat "50 GS/s ET" is displayed on the screen.

Which still doesn't make much difference, but perhaps unmatched delays between interleaved ADCs could apply.  (Which actually isn't even the case for my TDS460, which has the full sample rate on each channel.  So maybe I should just shut up. :P )

But like I said, I haven't seen an article about it.  Did HP build anything with that sort of function?  Did they write it up in a journal article?  That'd be nice...

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline Keysight DanielBogdanoff

  • Supporter
  • ****
  • Posts: 788
  • Country: us
  • ALL THE SCOPES!
    • Keysight Scopes YouTube channel
Quote
In the case of the GDS-2204E you can do the adjustment yourself using SPC (signal path compensation) and self-calibration. I think some calibration companies also calibrate (check) GW Instek scopes. If the calibration fails (out of specification) the scope is likely broken, but fortunately the GDS2000 series is covered by a limited lifetime warranty for the first owner; the warranty ends 5 years after production of the model stops.

That sounds good!  Does anybody know if Keysight's 2000X and 3000XT oscilloscopes have such a function?

You can do a self-cal, but it will invalidate any official calibration.  The data sheet gives a specification for the calibration cycle of each piece of equipment.  One of the things our team has been working on is extending the calibration life of our scopes.

For example, the 3000XT has a 3-year calibration cycle, and the 2000X has (I think) a 2-year cycle.

 
The following users thanked this post: bitseeker

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17117
  • Country: us
  • DavidH
Quote
Why would there be any constraints between clock generation and triggering?  Digital triggering only cares that the sample clock is clean (and that the signal is free of aliasing), and analog triggering either does or does not use jitter correction, depending on the sophistication of the design.

Drift in FR-4, I suppose.  It's probably not as big a deal now that real-time sampling is pervasive, but that kind of thing would matter much more to a 100 MS/s scope when the big fat "50 GS/s ET" is displayed on the screen.

The trigger-to-sample offset only has to be stable for equivalent-time sampling.  Automatic calibration of gain and zero, and sometimes linearity, is done on the interpolator, but only because that is more economical than building in absolute accuracy.
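A sketch of why that stability matters: equivalent-time reconstruction orders samples by their measured trigger-to-sample delay, so any drift in that offset shifts the delays and smears the reconstructed time axis (values are illustrative):

Code:
# Equivalent-time reconstruction sketch: each acquisition yields a sample
# plus its measured trigger-to-sample delay, and sorting by that delay
# builds a record with finer effective time resolution than the ADC rate.
def reconstruct_et(acquisitions):
    # acquisitions: list of (trigger_to_sample_delay_s, sample_value)
    return sorted(acquisitions)            # time-ordered (delay, value) pairs

wave = reconstruct_et([(2.0e-10, 0.1), (0.5e-10, 0.8), (1.2e-10, 0.4)])
print(wave)                                # [(5e-11, 0.8), (1.2e-10, 0.4), ...]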

Quote
Which still doesn't make much difference, but perhaps unmatched delays between interleaved ADCs could apply.  (Which actually isn't even the case for my TDS460, which has the full sample rate on each channel.  So maybe I should just shut up. :P )

Interleaving applies to both real-time and equivalent-time sampling; it just depends on how the digitizer was designed.  I know faster discrete designs calibrate the interleave timing, but this may be done manually during a periodic calibration or automatically.  In my experience, manually calibrated interleaving does not drift much, and it is typically only checked during calibration.
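A toy illustration of that timing calibration, with made-up samples and a measured skew of one tenth of a sample period (linear interpolation is used only for clarity; real designs use much better interpolators):

Code:
# Interleave timing correction sketch: once the skew between converters is
# measured, resample the mistimed stream by a fractional delay, then merge.
def resample(channel, skew):
    # estimate the value 'skew' of a sample period past each sample instant
    return [a + skew * (b - a) for a, b in zip(channel, channel[1:])]

def interleave(even, odd):
    merged = []
    for a, b in zip(even, odd):
        merged.extend((a, b))
    return merged

corrected = interleave([0.0, 0.5, 1.0], resample([0.2, 0.7, 1.2], skew=0.1))
print(corrected)   # [0.0, 0.25, 0.5, 0.75]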

The TDS400 series does not use interleaving anywhere, because 100 MS/s flash converters were readily available and economical at the time, so one was dedicated to each channel.  They had been around since at least 1990, when Tektronix used the AD9002 from Analog Devices in the later 22xx series DSOs.

Quote
But like I said, I haven't seen an article about it.  Did HP build anything with that sort of a function?  Did they write it up in a journal article?  That'd be nice...

Tim

They might have, but details of this sort of thing seem to be treated as trade secrets.  HP has a white paper about their random interleaved sampling technology which might mention something.
 

