Author Topic: combination of uncertainties


Offline e61_philTopic starter

  • Frequent Contributor
  • **
  • Posts: 963
  • Country: de
combination of uncertainties
« on: December 20, 2018, 04:59:50 pm »
I have often heard from a couple of volt-nuts that they believe their 10V is exact because they compared it with different meters and the readings agree within, say, 1ppm, even though the meters have uncertainties much higher than 1ppm.

I wonder how likely it is that two independently calibrated meters agree in their readings, and what that means for the uncertainty of the measurement.

My situation is the following: I have access to a calibrated Fluke 8508A (calibrated at Fluke, traceable to NPL) and two HP 3458As (calibrated by a local cal lab, traceable to PTB). All meters agree within 1ppm. The question is: how uncertain is the 10V given these readings?

To evaluate that, I built a very simple Monte Carlo simulation which simulates the calibrations of the two meters (the second 3458A doesn't help here, because it isn't independently calibrated).

The idea is to use normally distributed random numbers with a mean of 10V (the absolute value doesn't matter; I chose 10 for easier comprehension).

A first test to verify the assumptions is to check whether the random numbers agree with the specifications of the 8508A. Again, the code isn't optimized or beautiful; it is written for comprehension.

Code:
import numpy as np

mean  = 10
uc    = 3.4e-6  # relative uncertainty of the 95% confidence interval
k     = 2       # coverage factor for the 95% interval
sigma = 1/k * uc * mean

samples = np.random.normal(mean, sigma, 1000000)

count_95 = 0
count_99 = 0

# count how many samples fall within the 95% and 99% spec limits
for sample in samples:
    if np.abs((sample - 10) / 10) <= 3.4e-6:
        count_95 += 1

    if np.abs((sample - 10) / 10) <= 4.5e-6:
        count_99 += 1

print(count_95 / len(samples) * 100)
print(count_99 / len(samples) * 100)

For 10V, the 8508A is specified with 3.4ppm at 95% confidence and 4.5ppm at 99% confidence (1-year spec).

The result of this short test:
Code:
95.45
99.19

That is exactly what we expect.
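The same numbers also follow analytically from the normal CDF; a quick cross-check:

Code:
from math import erf, sqrt

sigma_ppm = 3.4 / 2  # 1-sigma uncertainty in ppm, from the 95% (k = 2) spec
for limit_ppm in (3.4, 4.5):
    k = limit_ppm / sigma_ppm
    coverage = erf(k / sqrt(2))  # P(|error| <= k*sigma) for a normal distribution
    print(f"{limit_ppm} ppm -> {coverage * 100:.2f} %")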


The next step is to create a second distribution which represents the 3458A. Let's assume I compare the 3458A directly after its calibration, and we use the calibration uncertainty of 4.2ppm (this is what the local cal lab states). Now we can simulate many full "calibrations" (I took one million) and keep all calibrations which agree within 1ppm.

Code:
import numpy as np

mean  = 10
uc    = 1e-6             # 1 ppm, used as the base unit below
sigma = 1/2 * uc * mean  # 1-sigma corresponding to a 1 ppm (k=2) uncertainty

equal = []
runs  = 1000000

for i in range(0, runs):
    meas = []
    # 8508A: 3.4 ppm (95%) spec; 3458A: 4.2 ppm (95%) calibration uncertainty
    meas.append(np.random.normal(mean, sigma * 3.4, 1)[0])
    meas.append(np.random.normal(mean, sigma * 4.2, 1)[0])
    meas = np.array(meas)

    # keep only the "calibrations" where both readings agree within 1 ppm
    if np.abs((np.max(meas) - np.min(meas)) / np.mean(meas) * 1e6) <= 1:
        equal.append(np.mean(meas))
First, one can count how many calibrations deliver equal readings: that is the case in about 29% of the runs. So it isn't that unlikely to get matching readings with such uncertainties.

These equal readings are again Gaussian distributed. Therefore, we can calculate the standard deviation and a 95% confidence interval to get our expanded uncertainty.
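The evaluation of the kept runs could look like this (a sketch, continuing from the `equal` list of the block above):

Code:
equal = np.array(equal)

print(len(equal) / runs * 100)  # ~29: percentage of runs agreeing within 1 ppm

# expanded uncertainty (k = 2, ~95%) of the accepted mean readings, in ppm
print(2 * np.std(equal) / np.mean(equal) * 1e6)  # ~2.65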

The result in this case is: 2.65ppm (95%).

Conclusion: getting equal readings improves the uncertainty, but it doesn't prove that the reading is exact.

This example only works with two completely independent calibration paths. If there is some dependency, equal readings are much more likely and the resulting uncertainty is also higher.
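To put numbers on the dependency case (a sketch, assuming a hypothetical 1.2ppm shared component hidden inside the same total sigmas as above):

Code:
import numpy as np

rng = np.random.default_rng(0)
runs, mean = 1000000, 10

# keep the total 1-sigma specs from above (1.7 / 2.1 ppm), but assume a
# hypothetical 1.2 ppm of each is a shared (correlated) component
s_common = 1.2e-6 * mean
s1_ind = np.sqrt((1.7e-6 * mean)**2 - s_common**2)
s2_ind = np.sqrt((2.1e-6 * mean)**2 - s_common**2)

shared = rng.normal(0, s_common, runs)  # identical error seen by both meters
m1 = mean + rng.normal(0, s1_ind, runs) + shared
m2 = mean + rng.normal(0, s2_ind, runs) + shared

agree = np.abs(m1 - m2) / mean * 1e6 <= 1
avg = ((m1 + m2) / 2)[agree]

print(agree.mean() * 100)            # ~37%: agreement becomes more likely
print(2 * np.std(avg) / mean * 1e6)  # ~3.1 ppm: but the result is more uncertain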


Anything I overlooked?
« Last Edit: December 20, 2018, 05:30:19 pm by e61_phil »
 
The following users thanked this post: TiN

Offline Kleinstein

  • Super Contributor
  • ***
  • Posts: 14736
  • Country: de
Re: combination of uncertainties
« Reply #1 on: December 20, 2018, 05:20:19 pm »
The meters may use the same type of reference (e.g. LTZ1000), as there are not that many different references to choose from at the high end (essentially only the LTZ and the LTFLU). These references usually drift in a common direction, so the drift part of the uncertainty can be correlated. The calculation could therefore be OK for the short term, but a little too optimistic for the longer-term (e.g. 1-year) specs.

With a lower-grade instrument there might be a similar effect from the calibrators used, though at a low level. At the high end, with a 3458A and an 8508A, I would expect the cal labs to already use a known drift history to correct for the expected drift. This may not be the case at a lower-grade lab.

 
The following users thanked this post: TiN, e61_phil

Offline e61_philTopic starter

  • Frequent Contributor
  • **
  • Posts: 963
  • Country: de
Re: combination of uncertainties
« Reply #2 on: December 20, 2018, 05:25:54 pm »
Quote
The meters may use the same type of reference (e.g. LTZ1000), as there are not that many different references to choose from at the high end (essentially only the LTZ and the LTFLU). These references usually drift in a common direction, so the drift part of the uncertainty can be correlated. The calculation could therefore be OK for the short term, but a little too optimistic for the longer-term (e.g. 1-year) specs.

Very interesting point! Thanks.
I wanted to show that equal readings don't mean that the value is exactly true.
At first I thought the drift was completely included in the specs, but if it isn't random then everything is even worse :)

Seems like the 2.65ppm here really is the best case.
 

Offline Andreas

  • Super Contributor
  • ***
  • Posts: 3296
  • Country: de
Re: combination of uncertainties
« Reply #3 on: December 20, 2018, 06:35:28 pm »
Hello,

Did you consider the uncertainty of the calibration as well, or only the uncertainty of the instrument?

Often the 95% uncertainty of calibration is also around 2.7ppm.

I think you can only get better with several calibrations if you can determine the drift rate of your instrument.

with best regards

Andreas
 

Offline e61_philTopic starter

  • Frequent Contributor
  • **
  • Posts: 963
  • Country: de
Re: combination of uncertainties
« Reply #4 on: December 20, 2018, 06:42:01 pm »
Hi Andreas,

Quote
Did you consider the uncertainty of the calibration as well, or only the uncertainty of the instrument?

The Fluke specification includes everything. The calibration uncertainty for the 8508A was 0.7ppm at 10V.

For the 3458A I used the calibration uncertainty and assumed I compare both meters directly after the calibration of the 3458A. In reality, the 3458A uncertainty would be a little higher.


Quote
I think you can only get better with several calibrations if you can determine the drift rate of your instrument.

I think that is difficult without very low calibration uncertainties. Our 3458As are calibrated with 4.2ppm uncertainty, which is already higher than the yearly drift of the meters (one is equipped with option 002).
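A rough illustration of the problem (a sketch, assuming a hypothetical true drift of 2ppm/year and our 4.2ppm (k=2) calibration uncertainty):

Code:
import numpy as np

rng = np.random.default_rng(1)
t = np.array([0.0, 1.0, 2.0, 3.0])  # four yearly calibrations
true_drift = 2.0                    # hypothetical true drift, ppm/year
u_cal = 4.2 / 2                     # 1-sigma calibration uncertainty, ppm

# fit a drift rate through many simulated calibration histories
slopes = [np.polyfit(t, true_drift * t + rng.normal(0, u_cal, t.size), 1)[0]
          for _ in range(10000)]
print(np.mean(slopes), np.std(slopes))  # ~2.0 +/- ~0.9 ppm/year

Even after four yearly calibrations, the fitted drift rate scatters by almost half its value.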

The idea was to combine as many independent sources for 10V as possible to reduce the uncertainty this way.
« Last Edit: December 20, 2018, 06:46:59 pm by e61_phil »
 

Offline GregDunn

  • Frequent Contributor
  • **
  • Posts: 725
  • Country: us
Re: combination of uncertainties
« Reply #5 on: December 20, 2018, 07:40:59 pm »
Thanks for doing this analysis.

How much more complex is this if you have the following:

1) several meters with different accuracy specifications;
2) several "standards" which have been "calibrated" against different meters with different specifications?

I have both, obviously, and it's interesting to note that each meter seems to be offset from the alleged standard output by a very similar amount: e.g., my 8800A #1 reads low on standard #1 by a certain amount, and my 8800A #2 reads low on standard #1 by a different amount. When I switch to standard #2, both 8800A units are off by a different amount, yet the difference in readings between the two 8800As is nearly identical for each standard. Is this only telling me that my 8800As are fairly stable with respect to each other? Neither measurement is outside the 8800A spec if the "standard" is assumed to be correct.

Is there any value in knowing the measurements of each standard by each meter, without knowing the uncertainties of the standards? I know the reference chips used in each one, but I have only a single measurement taken by an allegedly calibrated and named meter, and no actual long-term stability values.

I know that measurement accuracy to the limits of my 5½-digit precision is not an achievable goal with the budget I have, but I am curious whether I can draw any useful conclusions at all with the equipment I have. If it's pertinent I can describe the complete list of devices, and I'm willing to calculate it myself and share the results if there is anything to be gained from an aggregate of the data.
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5427
  • Country: us
Re: combination of uncertainties
« Reply #6 on: December 20, 2018, 08:08:12 pm »
In principle you can improve the uncertainty with more sources/standards, but you need to know the actual distributions of each of the sources. The Gaussian and independence assumptions are widely used because there is some justification and the math is much simpler, but as earlier posts point out, those assumptions don't hold in many real cases.

The practical answer is that it takes an enormous number of low-accuracy standards to improve the estimate by anything close to the two or three orders of magnitude that separate the low-accuracy standards you mention from what you are after, without resorting to statistical tricks.
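The scaling behind this (a sketch, assuming independent Gaussian standards, which is already the best case):

Code:
import numpy as np

u_single = 100.0  # hypothetical uncertainty of each low-accuracy standard, ppm
for n in (10, 100, 10000):
    # the mean of n independent standards improves only as 1/sqrt(n)
    print(n, u_single / np.sqrt(n))  # 31.6, 10.0, 1.0 ppm

Closing two orders of magnitude this way would take ten thousand independent standards.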
 
The following users thanked this post: GregDunn

Offline Conrad Hoffman

  • Super Contributor
  • ***
  • Posts: 1987
  • Country: us
    • The Messy Basement
Re: combination of uncertainties
« Reply #7 on: December 20, 2018, 08:29:56 pm »
IMO, there's a difference between the published uncertainties and drift of new instruments, and what a specific instrument does after it has some years of history behind it. An instrument that's had years to settle down, has a history of calibrations, and happens to live in a very stable environment might be an order of magnitude better than the specs. If you take the specs as the only thing you can trust, working at 1ppm will be nearly impossible.
 
The following users thanked this post: TiN, GregDunn

Offline e61_philTopic starter

  • Frequent Contributor
  • **
  • Posts: 963
  • Country: de
Re: combination of uncertainties
« Reply #8 on: December 20, 2018, 09:42:36 pm »
Quote
IMO, there's a difference between the published uncertainties and drift of new instruments, and what a specific instrument does after it has some years of history behind it. An instrument that's had years to settle down, has a history of calibrations, and happens to live in a very stable environment might be an order of magnitude better than the specs. If you take the specs as the only thing you can trust, working at 1ppm will be nearly impossible.

I doubt that anything will improve by an order of magnitude over time.

And I also think it is much harder to work at 1ppm than many people believe. Many cal labs don't have a 10V uncertainty of 1ppm, even though they use Fluke 732Xs which are directly calibrated by an NML.
« Last Edit: December 20, 2018, 09:44:47 pm by e61_phil »
 

Offline e61_philTopic starter

  • Frequent Contributor
  • **
  • Posts: 963
  • Country: de
Re: combination of uncertainties
« Reply #9 on: December 20, 2018, 09:44:22 pm »
Quote
How much more complex is this if you have the following:

1) several meters with different accuracy specifications;
2) several "standards" which have been "calibrated" against different meters with different specifications?

I think it is quite complex, because you don't have enough information to model it correctly. But perhaps I'm completely wrong ;)
 
The following users thanked this post: TiN

Offline GregDunn

  • Frequent Contributor
  • **
  • Posts: 725
  • Country: us
Re: combination of uncertainties
« Reply #10 on: December 20, 2018, 10:57:07 pm »
"Not enough information" is an entirely acceptable answer.   ;)  I know this is a minefield, and I'm just trying to avoid stepping on one by making un-justified assumptions.  Thanks to all for sharing their thoughts.
 
The following users thanked this post: TiN

Offline TiN

  • Super Contributor
  • ***
  • Posts: 4543
  • Country: ua
    • xDevs.com
Re: combination of uncertainties
« Reply #11 on: December 21, 2018, 12:28:06 am »
Quote
The idea was to combine as many independent sources for 10V as possible to reduce the uncertainty this way.
Without characterization of those sources, no amount of "averaging" will really give you confidence.

Quote
I doubt that anything will improve by an order of magnitude over time.
To put it the other way: things can worsen by an order of magnitude over time, e.g. meters that start drifting after a humidity surge event or whatnot.
This is where having a stack of different meters is helpful, even if they were all calibrated against the same source: it lets you detect the meters that go rogue and no longer match the predicted measurement.

With a leap of faith ( :popcorn: ) you could in theory apply some prediction algorithm, with a method similar to Fluke's 732 characterization, to build "expected" vs "measured" data sets for the meters. After N years (where N > 3) you may be able to reduce the uncertainty of your meter's 10V, if the calibration is good enough. But then again, you need four meters to guardband against accidents (a drifty A3, which just got more expensive, cough cough).
 

Offline Moon Winx

  • Regular Contributor
  • *
  • Posts: 83
  • Country: us
Re: combination of uncertainties
« Reply #12 on: December 21, 2018, 02:57:07 am »
As an extreme example, say you had two JVS systems and measured the difference between their 10V outputs. Your uncertainty for the comparison would be dominated by the noise of the readings from the detector/voltmeter that measures this difference. Would that give you a better uncertainty than a single JVS, or worse? If you never checked, you would assume this difference to be 0, and it would never be included in the budget for a single JVS. But knowing there is a difference, you would have to include it, right?

It's like the old metrology joke: "If you have one clock, you know what time it is. If you have two, you are unsure."
 
The following users thanked this post: TiN

Offline TiN

  • Super Contributor
  • ***
  • Posts: 4543
  • Country: ua
    • xDevs.com
Re: combination of uncertainties
« Reply #13 on: December 21, 2018, 04:00:40 am »
Moon has a very good point. Supracon does not publish solid uncertainty specifications either, only "typical" parameters, which e61_phil very much likes to avoid :-+.



Interesting to note the gain calibration accuracy for a DMM calibration versus the JVS system: a whole juicy 2ppm at 10V.

An actual comparison between the LHe system and the cryocooler, from the Supracon site:


At that level it becomes a tricky land of math and statistics. After all, this is how the whole customer-tier calibration business runs: on the assumption that standards and references have a high probability of being in spec between the actual calibration cycle points. It's only a question of how deeply you want that assumption verified in between (and what your customer is ready to pay for).

P.S. This system spec is for the older-generation 75GHz JJA with a 2182A as detector.
« Last Edit: December 21, 2018, 04:06:05 am by TiN »
 
The following users thanked this post: Moon Winx

Offline e61_philTopic starter

  • Frequent Contributor
  • **
  • Posts: 963
  • Country: de
Re: combination of uncertainties
« Reply #14 on: December 21, 2018, 06:52:52 am »
Quote
The idea was to combine as many independent sources for 10V as possible to reduce the uncertainty this way.
Without characterization of those sources, no amount of "averaging" will really give you confidence.

I can't follow you here. Do you think that if your NML gives you a calibration with a given uncertainty, you can't trust them?




Quote
I doubt that anything will improve by an order of magnitude over time.
To put it the other way: things can worsen by an order of magnitude over time, e.g. meters that start drifting after a humidity surge event or whatnot.
This is where having a stack of different meters is helpful, even if they were all calibrated against the same source: it lets you detect the meters that go rogue and no longer match the predicted measurement.
Getting worse doesn't mean that getting better by the same amount is possible.

To clarify: I'm not talking about meter specifications here, I'm talking about calibrations and their uncertainties. As Kleinstein already said, the equipment in your lab will not be fully independent. So strictly speaking, the example above is only valid if you have a 10V standard which never drifts or has any other dependencies. For every other case the numbers get WORSE, not better!


Quote
With a leap of faith ( :popcorn: ) you could in theory apply some prediction algorithm, with a method similar to Fluke's 732 characterization, to build "expected" vs "measured" data sets for the meters. After N years (where N > 3) you may be able to reduce the uncertainty of your meter's 10V, if the calibration is good enough. But then again, you need four meters to guardband against accidents (a drifty A3, which just got more expensive, cough cough).


Again: we are not talking about the equipment in your lab. The question was how accurate the 10V from different sources is if the readings agree. The calibration uncertainties in the Fluke paper you mentioned are all lower than the resulting uncertainty, even with their prediction. That is my point.


Quote
As an extreme example, say you had two JVS systems and measured the difference between their 10V outputs. Your uncertainty for the comparison would be dominated by the noise of the readings from the detector/voltmeter that measures this difference. Would that give you a better uncertainty than a single JVS, or worse? If you never checked, you would assume this difference to be 0, and it would never be included in the budget for a single JVS. But knowing there is a difference, you would have to include it, right?

I would say it doesn't matter whether you have one or two JVS (if they are OK), because the quantum volt itself is exact by definition. And as you already said, the uncertainty comes from the measurement itself. Therefore, it shouldn't make a difference in the numbers whether you compare your DUT 100 times against JVS1 and 100 times against JVS2, or 200 times against only one of the JVS.
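A quick numerical check of that claim (a sketch with hypothetical noise figures; since both JVS outputs are exact by definition, the two cases reduce to identical statistics):

Code:
import numpy as np

rng = np.random.default_rng(2)
noise = 10e-9    # hypothetical detector noise per reading, V
dut_err = 25e-9  # hypothetical fixed DUT error vs. the quantum volt, V

# 100 readings against each of two JVS vs. 200 readings against one JVS:
# the JVS outputs are identical by definition, so the draws are the same
two_jvs = dut_err + rng.normal(0, noise, 200)
one_jvs = dut_err + rng.normal(0, noise, 200)

# both estimates of the DUT error have a standard error of noise / sqrt(200)
print(two_jvs.mean(), two_jvs.std(ddof=1) / np.sqrt(two_jvs.size))
print(one_jvs.mean(), one_jvs.std(ddof=1) / np.sqrt(one_jvs.size))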


Quote
Moon has a very good point.

What is the point?


Quote
Supracon does not publish solid uncertainty specifications either, only "typical" parameters, which e61_phil very much likes to avoid :-+.

I always try to avoid using typical specs because they are pure marketing speak. What does "typical" mean? How many standard deviations are typical? If you can answer that, you can calculate with "typical" numbers. But if you have that information, you can also state confidence intervals like other companies do.

The term "typical" isn't defined, and everybody uses it differently. For my daily work, the typical values of electronic parts only give some confidence that a selection might work if I can't find a better solution. But normally you don't get any hint of the yield of such a selection.
« Last Edit: December 21, 2018, 09:37:43 am by e61_phil »
 
The following users thanked this post: TiN

Offline TiN

  • Super Contributor
  • ***
  • Posts: 4543
  • Country: ua
    • xDevs.com
Re: combination of uncertainties
« Reply #15 on: December 21, 2018, 10:06:18 am »
Quote
if your NML gives you a calibration with a given uncertainty, you can't trust them?
Calibration uncertainty does not imply stability. You get 10V uncertain to x ppm at the time of calibration only. To determine the uncertainty 5 days, or 50 days, after the calibration, you need measured, known stability of your DUT and methods. Having a calibration obviously says nothing about whether the DUT will meet its 30-day/90-day/1-year specifications either (typical or not). :scared:

Quote
Getting worse doesn't mean that getting better by the same amount is possible.

The question was how accurate the 10V from different sources is if the readings agree. The calibration uncertainties in the Fluke paper you mentioned are all lower than the resulting uncertainty, even with their prediction. That is my point.

Of course; I never said otherwise, quite the opposite. I'd go further and say that agreement between meters _outside of calibration_ doesn't help the measurement uncertainty either.

Prediction from a characterized reference model, like Fluke does in the 732 tests, helps to establish the uncertainty between the calibration points, so you can have some confidence that the standards are still in spec without sending them to calibration yet. The same goes for ACAL-derived points, by the way: you can use them as a prediction of stability between the external calibration points with their cal-lab-assigned uncertainty. Sometimes this is important, for the case where the 1-year specifications of the instrument are not good enough but one cannot afford a shorter calibration cycle. EDIT: this is a great area where you can apply a "typical" spec to your product when it is measured by a reference DMM between its cal cycle dates. E.g. a source's 10V has a typical *relative* accuracy of 2.5ppm, but the guaranteed spec is 7ppm (4.5ppm of the reference meter + uncertainty of the NML cal).

Maybe we can do some practical application as an example? Sorry, but this is more of a theory talk without actual numbers in question.

It is no different from a quiz like:

Meter A agrees with meter B within 1ppm. Meter A was calibrated 193 days ago with a 10V uncertainty of 2.5ppm.
Does this mean that meter B is (±2.5ppm ± 1ppm + transfer spec) uncertain at 10V? :-DMM

The answer in my book: no, not at all. :-\
Because we don't know whether meter A is still within ±2.5ppm of the NML, or whether it drifted away by +10ppm by the time of the comparison with meter B, while meter B had a +9ppm error from the start. So "meter A - B = 1ppm difference" means nothing without history/stability data on BOTH units. Sure, it's not likely, but this assumption is as good as any. :horse:

But if I have meters A, B, C, D all measured together before the cal, with confirmed stability of 2ppm/year, then I send meter A to get a ±2.5ppm-uncertainty NML cal, then confirm that meters B, C, D agree with the returned meter A and that it is still stable to 2ppm/year, this gives confidence that meter A is probably still uncertain by ±2.5ppm + drift error at 10V. Here the absolute calibration and uncertainty of meters B, C, D is irrelevant and unimportant; only their stability matters. I'm sure somebody in the metrology field can correct me on this.

Quote
As an extreme example, say you had two JVS systems and measured the difference between their 10V outputs. Your uncertainty for the comparison would be dominated by the noise of the readings from the detector/voltmeter that measures this difference. Would that give you a better uncertainty than a single JVS, or worse? If you never checked, you would assume this difference to be 0, and it would never be included in the budget for a single JVS. But knowing there is a difference, you would have to include it, right?

Quote
I would say it doesn't matter whether you have one or two JVS (if they are OK), because the quantum volt itself is exact by definition
The point about the JVS was that the transfer from the quantum voltage is not exact either, and can never be; your comparison uncertainty is never zero. So you have to include this in the final result too. Because of that, a JVS system cannot have zero uncertainty in its specification. It is a small difference for practical uses, but it's there. :bullshit:

Quote
I always try to avoid using typical specs because they are pure marketing speak. What does "typical" mean? How many standard deviations are typical? If you can answer that, you can calculate with "typical" numbers. But if you have that information, you can also state confidence intervals like other companies do.
I read "typical" as "this is how the test sample lot behaved, but your mileage may vary". You can obtain a better spec once you request additional validation from the manufacturer/NML for your particular specimen, or a worse spec if you just get an outlier unit. [emoji14]

So if you keep this in mind, there is some value in a typical spec. But like you said, your definition is different, and if you go only by maximum guaranteed specs (which is not always possible/suitable), no problem with that either. I wouldn't say typical specs are evil anyway; given the choice between nothing at all and a typical spec, I'll take the typical spec and do my own tests. One is free to ignore any typical spec, as we please. Different manufacturers, or even different products from the same manufacturer, can have very different backing datasets for a typical spec too. Just like the 3458A's typical 0.05ppm INL: a lot of effort was put into that number, so as a result most (but not all) units can do this INL. Or on the other end of the scale... the typical 0ppm TCR of VPG BMFs. Yeah, sure, maybe for 1 sample out of 100 pcs... :-D
« Last Edit: December 21, 2018, 10:56:39 am by TiN »
 
The following users thanked this post: e61_phil

Offline e61_philTopic starter

  • Frequent Contributor
  • **
  • Posts: 963
  • Country: de
Re: combination of uncertainties
« Reply #16 on: December 21, 2018, 12:28:02 pm »
We are talking about completely different things here. You are talking about stability and confidence in the stability of your lab equipment; I'm talking about calibration uncertainty only.

All the things you mention add on top of my example above. That is what I meant by "it can only get worse": you cannot be better than the calibration. You cannot even reach the calibration uncertainty, because of the things you already described here.


One word about the JVS: I think we agree here. I said the JVS itself doesn't matter; it is the transfer which brings the uncertainty. Therefore, it doesn't matter how many JVS you have.



Guaranteed specs: in electronics there are often guaranteed specs; the part is measured by the manufacturer and sorted out if it doesn't fit. But here the specifications are built on statistics. There is no guarantee of 4ppm/a for the option 002 3458A, but there is a number which can be used for calculation (k=2). If anybody does many measurements, they should be able to give such numbers. But those numbers often don't look that nice, so marketing decides to print "typical" numbers instead. If we are better than the competitor, our typical values may contain 3 sigma; if we are not so good, only 1 sigma is included. As long as nobody knows what exactly "typical" means, it is completely useless. It only means that there are units which reach this spec.

How many 3458As will have 0.05ppm INL? 99%? 80%? 1%? Or three units out of an engineering batch of 100 which were tested during development? It says nothing. If you had a real specification which says 0.05ppm INL at whatever 1.3 sigma, then you would be able to calculate with such values.
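How much that matters is easy to see (a sketch: the fraction of units meeting a "typical" limit, depending on how many sigma the limit hides):

Code:
from math import erf, sqrt

# fraction of units within the "typical" limit if that limit equals k sigma
for k in (1, 2, 3):
    print(k, erf(k / sqrt(2)) * 100)  # 68.3 %, 95.4 %, 99.7 %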

Edit: Oh, I overlooked your Vishay example. That is a very good example :-DD
« Last Edit: December 21, 2018, 12:42:27 pm by e61_phil »
 

Offline TiN

  • Super Contributor
  • ***
  • Posts: 4543
  • Country: ua
    • xDevs.com
Re: combination of uncertainties
« Reply #17 on: December 21, 2018, 01:17:20 pm »
Alright, maybe I was misled by your first post:

Quote
how likely it is that two independently calibrated meters agree in their readings, and what that means for the uncertainty of the measurement.

My situation is the following: I have access to a calibrated Fluke 8508A (calibrated at Fluke, traceable to NPL) and two HP 3458As (calibrated by a local cal lab, traceable to PTB). All meters agree within 1ppm. The question is: how uncertain is the 10V given these readings?

The answer to which would be: it's all bogus guesswork, and seeing the meters agree within 1ppm means nothing, as my example showed, unless you know the stability and history of all three units. I don't see how knowing the calibration uncertainty helps in this case, as it is a different matter from your question. ???

As a result, the following simulation is just math and programming gymnastics, and

Quote
Conclusion: getting equal readings improves the uncertainty, but it doesn't prove that the reading is exact.

is based on guesswork. Equal readings don't improve anything in the uncertainty of the measurement (the error from the SI unit, that is). :-//
« Last Edit: December 21, 2018, 01:20:51 pm by TiN »
 

Offline e61_philTopic starter

  • Frequent Contributor
  • **
  • Posts: 963
  • Country: de
Re: combination of uncertainties
« Reply #18 on: December 21, 2018, 01:30:25 pm »
OK, sorry, that was misleading. My whole point was: you cannot be sure your 10V is exact just because your measurements agree. I think we totally agree on that point.

The idea was to see whether one can combine different calibrations to achieve a lower uncertainty than any single one of those calibrations. And I still think that is the case. But one has to add the local instruments on top of those calibration uncertainties, as you described.
 
The following users thanked this post: TiN

Offline splin

  • Frequent Contributor
  • **
  • Posts: 999
  • Country: gb
Re: combination of uncertainties
« Reply #19 on: December 21, 2018, 02:35:51 pm »



Quote
Interesting to note the gain calibration accuracy for a DMM calibration versus the JVS system: a whole juicy 2ppm at 10V.

Can anyone suggest why a meter can only be calibrated to 2ppm against a 0.01ppm source? I can't think of anything other than noise, unless they are referring to adjusting a meter to 10V.
 

Offline Dr. Frank

  • Super Contributor
  • ***
  • Posts: 2425
  • Country: de
Re: combination of uncertainties
« Reply #20 on: December 21, 2018, 04:20:10 pm »



Quote
Interesting to note the gain calibration accuracy for a DMM calibration versus the JVS system: a whole juicy 2ppm at 10V.

Can anyone suggest why a meter can only be calibrated to 2ppm against a 0.01ppm source? I can't think of anything other than noise, unless they are referring to adjusting a meter to 10V.

The JJA cannot be specified in terms of uncertainty, because that depends completely on the DUT.
In other words, the uncertainty specification has to be defined by/with the DUT manufacturer.

Noise and stability figures of the DUT play a role, and of course how precise, stable, or uncertain the calibration transfer process is.

In the case of a DMM, this transfer is internal to the DVM, e.g. 0.1ppm for the 3458A (derived from its 10-minute transfer accuracy).
hp has also 'limited' the absolute uncertainty to about 2ppm, as has Fluke for the 8508A; call that artificial, or historic. Maybe it could be done much better.

In the case of voltage references, like the 732A/B, 4910, 7001, or our DIY LTZ1000s, it is a different story: first, the transfer is external (specified by the 4nV error of the SupraCon null instrument), but it is also affected by the cables, the (Pomona) jacks used, and the internal bunch of thermocouples.

Note also that these DVMs specify 'accuracy' or 'uncertainty', relative and absolute, whereas the references (732B et al.) specify 'stability' only, no uncertainty at all!

Maybe that subtle difference is the reason for the different opinions on the subject, here in this thread.

I propose to first define terms and methods better, for a common understanding.

Frank
 
The following users thanked this post: e61_phil, msliva

