Author Topic: Uncertainty Budgets and Decision Rules  (Read 16499 times)


Offline mendip_discoveryTopic starter

  • Frequent Contributor
  • **
  • Posts: 998
  • Country: gb
Re: Uncertainty Budgets and Decision Rules
« Reply #25 on: August 09, 2021, 10:33:59 pm »
Now time for the Budget,

Now all you need to do is bring your info into the budget. This again isn't that mental; the hard bit is working out what kind of probability distribution each contribution follows. All you are doing is dividing by 2, by 1, or by the square root of 3. Then you =SQRT(SUMSQ(YOUR_STANDARD_UNCERTAINTIES)), then times that by two to give you the +/-. Then to finish it off you round up to a nice number.
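As a sanity check outside the spreadsheet, the same arithmetic can be sketched in Python. The contributor values and divisors below are made up for illustration; the divisor maps each quoted figure to a 1-sigma standard uncertainty:

```python
import math

# Each contributor: (quoted value, divisor to reach a 1-sigma standard uncertainty)
#   divisor 2        -> quoted at k=2 (e.g. an accredited cert's expanded uncertainty)
#   divisor sqrt(3)  -> rectangular distribution (e.g. manufacturer spec, resolution)
#   divisor 1        -> already a standard deviation (e.g. Type A repeatability)
contributors = [
    (0.0020, 2),             # imported uncertainty from the cal cert, k=2
    (0.0050, math.sqrt(3)),  # manufacturer spec, rectangular
    (0.0005, math.sqrt(3)),  # UUT resolution, rectangular
    (0.0008, 1),             # repeatability, standard deviation
]

standard_uncs = [value / divisor for value, divisor in contributors]

# RSS the standard uncertainties, then expand with k=2 (~95 % coverage)
combined = math.sqrt(sum(u**2 for u in standard_uncs))
expanded = 2 * combined
print(f"u_c = {combined:.6f}, U (k=2) = {expanded:.6f}")
```

The final "round up to a nice number" step stays manual; rounding down would understate the uncertainty.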


*Note: please ignore the 5.000 in the mV column next to Man Spec; I was tinkering.*

It's not a scary thing, but it shows the importance of getting a good lab to calibrate your stuff and gathering enough data that you can use statistics to get away from Man Spec. The resolution just allows you to get better repeatability, but only if your machine repeats.

If I have time I will post something about decision rules tomorrow.
« Last Edit: August 09, 2021, 10:38:31 pm by mendip_discovery »
Motorcyclist, Nerd, and I work in a Calibration Lab :-)
--
So everyone is clear, Calibration = Taking Measurement against a known source, Verification = Checking Calibration against Specification, Adjustment = Adjusting the unit to be within specifications.
 
The following users thanked this post: e61_phil

Offline bdunham7

  • Super Contributor
  • ***
  • Posts: 8010
  • Country: us
Re: Uncertainty Budgets and Decision Rules
« Reply #26 on: August 10, 2021, 12:31:15 am »
OK, I'm not quite understanding all of that so perhaps you could help with the lingo and a few other things?

'Imported uncertainty'--is that the uncertainty of your lab or the device used by your lab in this test?  Or something else?

Could you explain the terms 'RSS' and 'dominance check'?

Why do you assume a rectangular probability distribution for the manufacturer-specified uncertainties?  And if you do assume that, why would the divisor be greater than one?  I would think that a rectangular distribution would represent a larger spread than even a k=1 normal distribution, thus requiring a divisor of less than 1.   Edit:  No tails, so I guess that's not right. I was visualizing a k=2 or more distribution--a k=1 distribution has a lot outside the bars. 

I have other questions but I'd be interested in those first so I can think about it.

« Last Edit: August 10, 2021, 02:39:38 am by bdunham7 »
A 3.5 digit 4.5 digit 5 digit 5.5 digit 6.5 digit 7.5 digit DMM is good enough for most people.
 
The following users thanked this post: Anders Petersson

Offline RYcal

  • Contributor
  • Posts: 31
  • Country: nz
Re: Uncertainty Budgets and Decision Rules
« Reply #27 on: August 10, 2021, 02:54:26 am »
Well well well

Welcome to my favorite topic... Let me start by outlining my experience in this field. I have over 15 years in accredited calibration labs across multiple disciplines, including pressure, temperature, and DC/LF electrical parameters. I currently own and run a DC/LF accredited cal lab and DO NOT offer any non-accredited calibrations, period; I see no value in them and they would devalue my lab.

There are typically two types of uncertainty budgets
1. CMC Budgets (these are what your scope of accreditation should be)
2. Uncertainty budgets for a UUT (unit under test).

In my lab we make a statement to specifications, but part of this is including the measurement uncertainty in that statement, i.e. if the error plus the uncertainty is greater than the spec we can't say whether it passes or fails, so we have another term we state, called a W.O.U. (window of uncertainty). So you can have a pass, a fail, or a W.O.U. It's as simple as guardbanding via measurement uncertainty. Most of my calibrations are done by remote control of my instruments and reporting is done in the same software package (MET/CAL, MET/TEAM), so all this is hard-coded into my procedures.

So my CMC budget is typically how good my reference equipment is. There are two ways to tackle this sensibly.
1. You could just be conservative and use manufacturer's specs, provided the equipment has a calibration cert attached and that cert doesn't involve any W.O.U.s (if it does, you blow the spec out to account for it). Manufacturer's specs typically include any expected drift, resolution, etc. HOWEVER, I must also monitor my gear to make sure it isn't drifting out of spec between calibrations; this is typically done every three months via an automated procedure that puts the data into an Excel file for me to plot.
2. You could apply corrections and uncertainties from the accredited calibration report that came with your reference gear (it definitely should have one to maintain traceability). This, however, can get VERY messy with having to interpolate corrections and uncertainties, but it is doable. It all depends on your target audience whether this work is worth it. You will still need to drift-check your reference gear using this approach as well.

A typical UUT uncertainty budget at the very minimum includes any uncertainty brought through from my reference gear, the uncertainty due to the resolution of the UUT, and a Type A contributor. I can break it down more if people want. It's the whole structure around the chain of traceability (a cal lab can't calibrate anything better than its own capabilities, and I have seen some that apparently can  :-DD)

This approach really throws the whole 4:1 ratio out the door. Say I have a 3:1 ratio; then without adding anything else into the mix I can roughly say I can afford an error of around 60% of spec without inducing a W.O.U. A 2:1 would bring that allowable error down to around 40% (these are all guesses at this stage, as I don't know the other contributors' values yet), but it gives me a good idea of what to expect from a calibration. I personally start to crunch numbers and look carefully at expected results at around a 2.5:1 ratio.
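For a rough feel for where those percentages come from, here is a sketch of my own (a simplification that ignores every contributor except the reference): take U ≈ spec/TUR and guardband the acceptance limit down to spec − U:

```python
def allowable_error_fraction(tur: float) -> float:
    """Fraction of the UUT spec acceptable without a W.O.U.,
    using the simple guardband: acceptance limit = spec - U,
    where U is approximated as spec / TUR (other contributors ignored)."""
    return 1 - 1 / tur

for tur in (4, 3, 2.5, 2):
    print(f"TUR {tur}:1 -> accept up to {allowable_error_fraction(tur):.0%} of spec")
```

This toy model gives 67% at 3:1 and 50% at 2:1, the same ballpark as the rough figures above; the remaining gap is exactly the "other contributors" the post says it is guessing about.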

To answer a few questions so far: RSS means "root sum square". It's an accepted way to combine all the uncertainties in your budget, provided they are at 1 sigma. The formula is as follows in Excel: "=SQRT(uc1^2+uc2^2+uc3^2)", where uc1, uc2, uc3 are the separate contributors at 1 sigma. This gives us what is called the standard uncertainty; then we can just multiply that by the k factor to get the expanded uncertainty. I personally calculate my k factor using the Welch-Satterthwaite formula.
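The RSS and the Welch-Satterthwaite effective degrees of freedom can be sketched like this (the contributor values and degrees of freedom below are hypothetical):

```python
import math

def combined_standard_uncertainty(uncs):
    """RSS of 1-sigma standard uncertainties."""
    return math.sqrt(sum(u**2 for u in uncs))

def welch_satterthwaite(uncs, dofs):
    """Effective degrees of freedom for the combined uncertainty.
    uncs: 1-sigma standard uncertainties; dofs: degrees of freedom
    (use float('inf') for Type B terms treated as exactly known;
    at least one finite dof is needed)."""
    u_c = combined_standard_uncertainty(uncs)
    denom = sum(u**4 / v for u, v in zip(uncs, dofs))
    return u_c**4 / denom

uncs = [0.0010, 0.0029, 0.0008]          # hypothetical 1-sigma contributors
dofs = [float('inf'), float('inf'), 9]   # only the Type A term has finite dof
nu_eff = welch_satterthwaite(uncs, dofs)
print(f"nu_eff = {nu_eff:.0f}")
```

With a large nu_eff the coverage factor from a Student's-t table is close to k = 2; with only a handful of repeat readings it can be noticeably larger.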

My main pieces of equipment are a Fluke 5522A and a Fluke 8588A.

I love talking uncertainties; this is just the tip of the iceberg. I also offer consulting and contracting to labs seeking to become 17025 accredited, or just on uncertainty matters.
« Last Edit: August 10, 2021, 03:02:15 am by RYcal »
 
The following users thanked this post: mendip_discovery

Offline bdunham7

  • Super Contributor
  • ***
  • Posts: 8010
  • Country: us
Re: Uncertainty Budgets and Decision Rules
« Reply #28 on: August 10, 2021, 03:37:08 am »
A typical UUT uncertainty budget at the very minimum includes any uncertainty brought through from my reference gear, the uncertainty due to the resolution of the UUT, and a Type A contributor. I can break it down more if people want. It's the whole structure around the chain of traceability (a cal lab can't calibrate anything better than its own capabilities, and I have seen some that apparently can  :-DD)

I would be interested in how you do it in practice.  I have additional questions for mendip_discovery that I should let him answer, but I'm sort of baffled by the treatment of the UUT manufacturer specifications.  Is there something obvious in his approach that I'm missing?

Quote
To answer a few questions so far RSS means "root sum square"

OK, got it!  Given that, I'm not clear on what the 'dominance check' is or what sums would be less than the RSS, but I'll leave that for the OP.

Quote
My main pieces of equipment are a Fluke 5522A and a Fluke 8588A.

So what does that give you in terms of your 'CMC budget', which I presume is the best uncertainties your lab is capable of?  And if you can squeak by with a TUR of 2.5:1 and a lot of guardband, what would be the most precise instruments that you would accept for calibration?  Do you have separate 10V, 1 ohm, 10k, etc standards as well?
A 3.5 digit 4.5 digit 5 digit 5.5 digit 6.5 digit 7.5 digit DMM is good enough for most people.
 

Offline RYcal

  • Contributor
  • Posts: 31
  • Country: nz
Re: Uncertainty Budgets and Decision Rules
« Reply #29 on: August 10, 2021, 04:15:31 am »
A typical UUT uncertainty budget at the very minimum includes any uncertainty brought through from my reference gear, the uncertainty due to the resolution of the UUT, and a Type A contributor. I can break it down more if people want. It's the whole structure around the chain of traceability (a cal lab can't calibrate anything better than its own capabilities, and I have seen some that apparently can  :-DD)

I would be interested in how you do it in practice.  I have additional questions for mendip_discovery that I should let him answer, but I'm sort of baffled by the treatment of the UUT manufacturer specifications.  Is there something obvious in his approach that I'm missing?

Quote
To answer a few questions so far RSS means "root sum square"

OK, got it!  Given that, I'm not clear on what the 'dominance check' is or what sums would be less than the RSS, but I'll leave that for the OP.

Quote
My main pieces of equipment are a Fluke 5522A and a Fluke 8588A.

So what does that give you in terms of your 'CMC budget', which I presume is the best uncertainties your lab is capable of?  And if you can squeak by with a TUR of 2.5:1 and a lot of guardband, what would be the most precise instruments that you would accept for calibration?  Do you have separate 10V, 1 ohm, 10k, etc standards as well?

All very good questions.
There is nothing going into the uncertainty budget that comes from the UUT specs. I have no idea what they mean by "dominance check" either; it is a new term to me as well.

Here's a decent read on the very basics of how I do it. NOTE: the "S1" formula is wrong in the article, but it gets corrected in the comments. https://support.flukecal.com/hc/en-us/articles/115005666563-Implementing-ISO-IEC-17025-Measurement-Uncertainty-Requirements-in-MET-CAL-Version-9-X

I'm very conservative at the moment and my terms "loosely" reflect the manufacturer's specs for my 5522A and my 8588A in their respective classes. Realistically, around 6.5-digit instruments is where I feel comfortable at the moment, using the 8588A as a transfer standard for most test points.
 

Offline Irv1n

  • Newbie
  • Posts: 8
  • Country: by
Re: Uncertainty Budgets and Decision Rules
« Reply #30 on: August 10, 2021, 04:53:09 am »
At first glance, the uncertainty budget looks off; it is undesirable to combine relative and absolute values. If the resolution of the device is 0.001 V, then its contribution to the uncertainty budget should be calculated as follows: 0.001 / 2 / sqrt(3), or equivalently 0.001 / sqrt(12). There are a couple more questions about the budget; I'll ask you later.
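In Python terms, the same arithmetic, just showing the two forms are identical:

```python
import math

resolution = 0.001                        # V, one least-significant digit
u_res = (resolution / 2) / math.sqrt(3)   # half-interval, rectangular distribution

# Equivalent single-step form:
assert math.isclose(u_res, resolution / math.sqrt(12))
print(f"u(resolution) = {u_res * 1e6:.1f} uV")
```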

Added:
Not entirely clear on the temperature coefficient. Does the temperature change by 4 degrees during the calibration process? How many degrees are you above normal calibration conditions?

Uncertainty by Type A:
This value is based on YOUR measurement process and reporting procedure. If your measurement process takes a single measurement and reports that value, then you need to take the standard deviation. Otherwise, if your reported "Measured Value" is the mean of a specified number of repeated measurements, then you can use the standard deviation of the mean. Ensure your experimental data consist of individual measurement values and not mean values!
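A small illustration of the difference, with made-up readings:

```python
import statistics

readings = [10.0012, 10.0015, 10.0011, 10.0014, 10.0013]  # individual values, not means

s = statistics.stdev(readings)       # report this if you report single readings
s_mean = s / len(readings) ** 0.5    # report this only if you report the mean

print(f"s = {s:.6f}, s_mean = {s_mean:.6f}")
```

Using the standard deviation of the mean for a single-reading process understates the Type A contribution by a factor of sqrt(n).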
« Last Edit: August 10, 2021, 06:12:56 am by Irv1n »
 

Offline RYcal

  • Contributor
  • Posts: 31
  • Country: nz
Re: Uncertainty Budgets and Decision Rules
« Reply #31 on: August 10, 2021, 05:11:05 am »
At first glance, the uncertainty budget looks off; it is undesirable to combine relative and absolute values. If the resolution of the device is 0.001 V, then its contribution to the uncertainty budget should be calculated as follows: 0.001 / 2 / sqrt(3), or equivalently 0.001 / sqrt(12). There are a couple more questions about the budget; I'll ask you later.

Correct, but UUT resolution is only one factor contributing to the budget.
 

Online tszaboo

  • Super Contributor
  • ***
  • Posts: 7950
  • Country: nl
  • Current job: ATEX product design
Re: Uncertainty Budgets and Decision Rules
« Reply #32 on: August 10, 2021, 07:37:59 am »
OK, I'm not quite understanding all of that so perhaps you could help with the lingo and a few other things?
Yeah, before you go "I want this and this and this on my cal certificate", you should look up the difference between standard and accredited calibration.
If you don't specifically ask for accredited, they will not give you that, because it is a lot more work and a lot more expensive, and most of the time companies just need the cal to tick a checkbox in their ISO 9001 or other certification.
 

Offline Anders Petersson

  • Regular Contributor
  • *
  • Posts: 122
  • Country: se
Re: Uncertainty Budgets and Decision Rules
« Reply #33 on: August 10, 2021, 09:03:12 am »
Yes, a 1 year cal is often done to 1 year specs.  The problem with that is the 1-year specs include a year's worth of drift and typically +/- 5C worth of tempco.  [...] If you test at just one controlled temperature and accept any value that falls within the 1-year specs, you have not assured that the meter will remain accurate for the calibration period or over the specified temperature range, nor do you have any sort of confidence interval, no matter how accurate your references are.

This is my problem with calibration services. Calibrating once a year with the 1-year specs as the criterion, especially without providing hard numbers on the measured margin to those specs, leaves the UUT possibly outside the 1-year specs for the whole year. The next calibration, the year after, should catch that the UUT is no longer within the 1-year specs and thus indicate that the measurements during the past year were less accurate than assumed. This makes calibration mostly useful for verifying past measurements, not guaranteeing performance in the coming year. In contrast, naively reading the datasheet would have you believe that a 1-year calibration schedule awards you the 1-year accuracy specs. Anyone care to comment on whether this weakness is well known?
 

Offline mendip_discoveryTopic starter

  • Frequent Contributor
  • **
  • Posts: 998
  • Country: gb
Re: Uncertainty Budgets and Decision Rules
« Reply #34 on: August 10, 2021, 10:49:03 am »
Yes, a 1 year cal is often done to 1 year specs.  The problem with that is the 1-year specs include a year's worth of drift and typically +/- 5C worth of tempco.  [...] If you test at just one controlled temperature and accept any value that falls within the 1-year specs, you have not assured that the meter will remain accurate for the calibration period or over the specified temperature range, nor do you have any sort of confidence interval, no matter how accurate your references are.

This is my problem with calibration services. Calibrating once a year with the 1-year specs as the criterion, especially without providing hard numbers on the measured margin to those specs, leaves the UUT possibly outside the 1-year specs for the whole year. The next calibration, the year after, should catch that the UUT is no longer within the 1-year specs and thus indicate that the measurements during the past year were less accurate than assumed. This makes calibration mostly useful for verifying past measurements, not guaranteeing performance in the coming year. In contrast, naively reading the datasheet would have you believe that a 1-year calibration schedule awards you the 1-year accuracy specs. Anyone care to comment on whether this weakness is well known?

The problem is more that manufacturers specify a spec without saying it's a 1-year spec, though a 12-month interval is implied. It's only the likes of the high-end units that start with the 24 h, 30 d, and 12 m stuff, and even then there is no guarantee the unit will be within that spec. The labs are just doing a periodic check, and maybe an adjustment if it's starting to wander out of specification. If the UUT is at +20% of spec then why adjust, because the next year it may be at +40%; if it's at 75% then there is an argument to adjust, as that could put it at 140% out the next year. Part of the issue is that if you take the lab's uncertainty into account, that could be 20% of the spec, so if they adjust at that level they may not be improving things; but at 75% they have higher confidence that adjusting will be an improvement.

This is when having a reference standard in your place of work, used on a monthly basis, helps. For example, with scales I will often suggest to a customer that they get a cast-iron weight and use it daily or weekly to verify the scales, so that if they start to wander the customer can get us in to adjust them. In the case of electronics, it can be as simple as plugging two meters into the mains and checking the measurements between them, or using a resistance standard, etc.

I think some people like to shift the problem of the accuracy of their equipment to an external lab and don't take steps to reduce the risks by carrying out internal periodic checks. These don't have to be high-standard checks, but they do help you monitor that essential equipment isn't going outside of spec. Keep a spreadsheet with your device's results year on year: is it well within spec? If it needs adjustment every year, why is that? Do you need to go from a 12-month to a 6-month interval? Is the equipment often being used outside of its temperature conditions?
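A minimal version of that year-on-year spreadsheet idea, with invented numbers for a 10 V point and the 75%-of-spec adjustment threshold mentioned above:

```python
spec = 0.005                      # +/- V, hypothetical 1-year spec at 10 V
history = {2018: 0.0008, 2019: 0.0015, 2020: 0.0024, 2021: 0.0039}  # yearly errors, V

flagged = {}
for year, error in sorted(history.items()):
    fraction = abs(error) / spec
    flagged[year] = fraction > 0.75   # echo the 75%-of-spec rule of thumb
    print(year, f"{fraction:.0%} of spec",
          "consider adjusting" if flagged[year] else "ok")
```

Even this crude trend makes drift visible: the error roughly doubles every two years, which is a better basis for a 12-month vs 6-month interval decision than a single pass/fail stamp.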

OK, I'm not quite understanding all of that so perhaps you could help with the lingo and a few other things?
Yeah, before you go "I want this and this and this on my cal certificate", you should look up the difference between standard and accredited calibration.
If you don't specifically ask for accredited, they will not give you that, because it is a lot more work and a lot more expensive, and most of the time companies just need the cal to tick a checkbox in their ISO 9001 or other certification.

Other than the privilege of a non-profit body visiting you and charging an eye-watering amount to watch you measure stuff, the only difference between 17025 and 9001 calibration is that I have to write down my work and maths and prove it. This often means you have a worksheet and a cert, so if you have to do corrections or calculations it's done there. I also have to prove that the maths and procedures work. It's more work at the end of the day, but a better way to work. It's nice to have someone who knows the subject come and see your work and comment on how you need to improve.


I am on holiday this week and just about to make a journey across the UK to St Neots. I will try to get back to you all tomorrow, but I never know, as I am helping my mate get ready for a couple of days at a motorcycle trackday at Cadwell Park, where I am the photographer (my actual qualified job).
Motorcyclist, Nerd, and I work in a Calibration Lab :-)
--
So everyone is clear, Calibration = Taking Measurement against a known source, Verification = Checking Calibration against Specification, Adjustment = Adjusting the unit to be within specifications.
 
The following users thanked this post: Anders Petersson

Offline mendip_discoveryTopic starter

  • Frequent Contributor
  • **
  • Posts: 998
  • Country: gb
Re: Uncertainty Budgets and Decision Rules
« Reply #35 on: August 10, 2021, 11:11:24 am »
'Imported uncertainty'--is that the uncertainty of your lab or the device used by your lab in this test?  Or something else?

That will be the uncertainty of the equipment used to test it, so in my case it's the uncertainty for the Transmille I have at work. You would often find that on the certificate.

Could you explain the terms 'RSS' and 'dominance check'?

The dominance check is something that is just on the uncertainty budget template I have; I have never really paid much attention to it. It basically adds up the relative values in ppm and checks the total against the k=2 figure to ensure the latter is greater. It's just a sanity check, I think.



Why do you assume a rectangular probability distribution for the manufacturer-specified uncertainties?  And if you do assume that, why would the divisor be greater than one?  I would think that a rectangular distribution would represent a larger spread than even a k=1 normal distribution, thus requiring a divisor of less than 1.   Edit:  No tails, so I guess that's not right. I was visualizing a k=2 or more distribution--a k=1 distribution has a lot outside the bars. 

I must admit this is where I get a little fuzzy on it. I have always known it to be square, and therefore it's divided by the square root of 3. I do struggle with the mathematics and probability stuff. I will take a look and see if I can find a reference in my notes as to why it's done like that.

At first glance, the uncertainty budget looks off; it is undesirable to combine relative and absolute values.

This combination thing is there because it was on the template I have. It's not used; it's just there for reference.

If the resolution of the device is 0.001 V, then its contribution to the uncertainty budget should be calculated as follows: 0.001 / 2 / sqrt(3), or equivalently 0.001 / sqrt(12). There are a couple more questions about the budget; I'll ask you later.

I think you are referring to the resolution being 1/2 the LSD, hence the 0.001 divided by 2. I must admit on some of the new budgets I have done for the mechanical stuff I have done this; I just didn't spot that I didn't do it here. Though TBH it's not a big part of the budget; the dominant term is the damned spec at +/-5 LSD.


Not entirely clear on the temperature coefficient. Does the temperature change by 4 degrees during the calibration process? How many degrees are you above normal calibration conditions?

I am stating that I am working at 22C +/-4C. Now, I may actually be working at 20C, and during cal it may only change by +1C, but that 1C isn't what I am accounting for; I am accounting for the error from not being at 22C. If I controlled my lab to +/-2C or +/-1C then I would get better ppm. My lab at work is 22C +/-2C.


Uncertainty by Type A:
This value is based on YOUR measurement process and reporting procedure. If your measurement process takes a single measurement and reports that value, then you need to take the standard deviation. Otherwise, if your reported "Measured Value" is the mean of a specified number of repeated measurements, then you can use the standard deviation of the mean. Ensure your experimental data consist of individual measurement values and not mean values!

I showed the way I worked out the standard deviation. I just take it as a ppm of the range, as that allows it to scale up and down with the range. At work I am lucky, as my repeatability testing usually compares the measuring kit I have against the generation kit I have, so I can use the same tests for both sides. There may be other ways to do this that reduce your numbers, but this is trying to be practical.
Motorcyclist, Nerd, and I work in a Calibration Lab :-)
--
So everyone is clear, Calibration = Taking Measurement against a known source, Verification = Checking Calibration against Specification, Adjustment = Adjusting the unit to be within specifications.
 

Offline ITTSB.Europe

  • Contributor
  • !
  • Posts: 28
  • Country: gr
    • Industrial Test Tools Scoreboard
Re: Uncertainty Budgets and Decision Rules
« Reply #36 on: August 10, 2021, 11:53:52 am »
Most calibration labs in the UK are a bad joke.
This conversation is a bad joke.
Did someone start a topic to advertise his skills globally?

The UK has two in-country sources to buy calibrators from, and how to use them is given by the manufacturer.

I got an Agilent multimeter from the UK with damage to its calibration circuit: it does not retain the stored calibration value and drifts slightly in the last two digits on DCV.
The lab tried 80 times, unsuccessfully, to store the calibration. They did not inform the customer about the damage; they simply printed a calibration certificate and got paid.

When I received the DMM, used, here in Greece, I found the calibration sticker and identified the calibration lab.
I contacted them to request a copy of the certificate if they had kept one, which they had.
Now I know their name and address.
While they did offer me a copy of the calibration report, I accuse them of a lack of ETHICS and professionalism.
Such opportunists should end up in jail.

I now own three identical Agilent/Keysight handheld DMMs; two are in pristine condition and the third is the one from the UK. I am using them as a three-phase voltage monitor with Bluetooth, reviewed remotely on a 7" tablet.
So despite the issue, the DMM is still useful to me.
 

Offline bdunham7

  • Super Contributor
  • ***
  • Posts: 8010
  • Country: us
Re: Uncertainty Budgets and Decision Rules
« Reply #37 on: August 10, 2021, 03:48:03 pm »
I must admit this is where I get a little fuzzy on it. I have always known it to be square, and therefore it's divided by the square root of 3. I do struggle with the mathematics and probability stuff. I will take a look and see if I can find a reference in my notes as to why it's done like that.

Here's an explanation without derivations.

https://www.isobudgets.com/probability-distributions-for-measurement-uncertainty/

Clearly at least some of these are just accepted approximations.  For example, the correction for the 'U-shaped distribution' will vary with its shape, approaching one as the shape changes toward having all the samples on one wall or the other.  According to some chicken scratching I did over this morning's crossword, the correction for a continuous rectangular distribution may well be the square root of three.  That still doesn't quite explain why that is the appropriate fit for incorporating a manufacturer spec into your budget.  I also find it odd that metrology practice would assume bounded distributions.



A 3.5 digit 4.5 digit 5 digit 5.5 digit 6.5 digit 7.5 digit DMM is good enough for most people.
 

Offline bdunham7

  • Super Contributor
  • ***
  • Posts: 8010
  • Country: us
Re: Uncertainty Budgets and Decision Rules
« Reply #38 on: August 10, 2021, 03:57:17 pm »
Most calibration labs in the UK are a bad joke.
This conversation is a bad joke.
Did someone start a topic to advertise his skills globally?

If you reach that conclusion on the basis of one sample, your statistical skills are the bad joke.

My only real complaint with cal labs is lack of transparency--without complete data and transparency, a calibration certificate is just paper and proves nothing.  However, they do what their customers want and if that isn't what I want, then I'm not a customer, simple as that. 

The OP has decided to discuss these issues and I applaud him for that; I think the discussion is worthwhile, even if I might not be a potential customer.  He's clearly not advertising and is quite openly acknowledging his and his lab's limitations.
A 3.5 digit 4.5 digit 5 digit 5.5 digit 6.5 digit 7.5 digit DMM is good enough for most people.
 
The following users thanked this post: Mickle T., mendip_discovery

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5455
  • Country: us
Re: Uncertainty Budgets and Decision Rules
« Reply #39 on: August 10, 2021, 04:04:13 pm »
I must admit this is where I get a little fuzzy on it. I have always known it to be square, and therefore it's divided by the square root of 3. I do struggle with the mathematics and probability stuff. I will take a look and see if I can find a reference in my notes as to why it's done like that.

Here's an explanation without derivations.

https://www.isobudgets.com/probability-distributions-for-measurement-uncertainty/

Clearly at least some of these are just accepted approximations.  For example, the correction for the 'U-shaped distribution' will vary with its shape, approaching one as the shape changes toward having all the samples on one wall or the other.  According to some chicken scratching I did over this morning's crossword, the correction for a continuous rectangular distribution may well be the square root of three.  That still doesn't quite explain why that is the appropriate fit for incorporating a manufacturer spec into your budget.  I also find it odd that metrology practice would assume bounded distributions.

Presumably because the OEM has removed units outside of some limit from delivery.  While uncertainty in the manufacturer's measurement equipment makes the value of this limit somewhat uncertain, the shape of the delivered distribution is not changed significantly under typical production rules of thumb, which suggest that test equipment should have uncertainty less than 10% of the production limits.

While this once would have been a good assumption, it will be wrong for manufacturers who have achieved TQM goals on their production (very high yield on testing).  In this case the distribution will probably be closer to normal, but as you move up the number of digits of precision, TQM goals become less and less likely to be achieved.

At the very top end it is difficult or impossible to achieve the 10x measurement margin, and the bounded distribution becomes less and less appropriate.

The bottom line: the assumption of a bounded distribution is yet another rule of thumb which is often appropriate, but requires examination for best-quality work.
 

Offline mendip_discoveryTopic starter

  • Frequent Contributor
  • **
  • Posts: 998
  • Country: gb
Re: Uncertainty Budgets and Decision Rules
« Reply #40 on: August 10, 2021, 06:10:39 pm »
I must admit this is where I get a little fuzzy on it. I have always known it to be square, and therefore it's divided by the square root of 3. I do struggle with the mathematics and probability stuff. I will take a look and see if I can find a reference in my notes as to why it's done like that.

Here's an explanation without derivations.

https://www.isobudgets.com/probability-distributions-for-measurement-uncertainty/

Clearly at least some of these are just accepted approximations.  For example, the correction for the 'U-shaped distribution' will vary with its shape, approaching one as the shape changes toward having all the samples on one wall or the other.  According to some chicken scratching I did over this morning's crossword, the correction for a continuous rectangular distribution may well be the square root of three.  That still doesn't quite explain why that is the appropriate fit for incorporating a manufacturer spec into your budget.  I also find it odd that metrology practice would assume bounded distributions.

It's from the GUM:

Quote
4.3.3 If the estimate xi is taken from a manufacturer's specification, calibration certificate, handbook, or other source and its quoted uncertainty is stated to be a particular multiple of a standard deviation, the standard uncertainty u(xi) is simply the quoted value divided by the multiplier, and the estimated variance u2(xi) is the square of that quotient.

So if the manufacturer quotes an uncertainty and says it's at k=2, it would be normal and you use a divisor of 2, much like an imported uncertainty.

*but*

Quote
4.3.7 In other cases, it may be possible to estimate only bounds (upper and lower limits) for Xi, in particular, to state that "the probability that the value of Xi lies within the interval a− to a+ for all practical purposes is equal to one and the probability that Xi lies outside this interval is essentially zero". If there is no specific knowledge about the possible values of Xi within the interval, one can only assume that it is equally probable for Xi to lie anywhere within it (a uniform or rectangular distribution of possible values — see 4.4.5 and Figure 2 a). Then xi, the expectation or expected value of Xi, is the midpoint of the interval, xi = (a− + a+)/2, with associated variance u²(xi) = (a+ − a−)²/12. If the difference between the bounds, a+ − a−, is denoted by 2a, then Equation (6) becomes u²(xi) = a²/3.

Sorry, some of the formulas have not come through, but it basically says that if a value is equally likely to lie anywhere between an upper and a lower limit, then a rectangular distribution applies. It does later state that if the value is normally spot on but might deviate off towards those limits, you can apply a triangular distribution instead.
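The two GUM cases above boil down to picking a divisor. A minimal sketch (the 0.004 and 0.005 figures are made-up examples, not from any real certificate or spec):

```python
import math

def standard_uncertainty(quoted, distribution, k=2.0):
    """Reduce a quoted uncertainty to a standard (1-sigma) uncertainty.

    'normal'      : quoted value is k standard deviations (GUM 4.3.3), divide by k
    'rectangular' : quoted value is a +/- bound (GUM 4.3.7), divide by sqrt(3)
    'u-shaped'    : divide by sqrt(2)
    """
    divisors = {"normal": k, "rectangular": math.sqrt(3), "u-shaped": math.sqrt(2)}
    return quoted / divisors[distribution]

# Imported uncertainty quoted at k=2 on a certificate:
print(standard_uncertainty(0.004, "normal", k=2))
# Manufacturer spec of +/-0.005 with no distribution stated -> rectangular:
print(standard_uncertainty(0.005, "rectangular"))
```

The rectangular divisor is smaller than 2, which is the point bdunham7 raised earlier: a hard-bounded spec with no tails carries relatively more of its probability near the edges than a normal distribution does.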

Oh, the ISO Budgets guy has also posted about this, and it reminded me about the microvolt-on-connection bit, bloody thermal EMF.

Just so people know, I am talking about this because I would like people to discuss it, and to get hobbyists (well, any end user) looking at what they are doing and what is coming in. I have customers who use items such as a CMM to measure mechanical parts, and when I ask about uncertainty they just look blank; if pressed they will just quote the tolerance from the manufacturer.

For the Cal Clubs, I think this can add a new level of fun: you can measure the same thing, state your measurement uncertainty, and compare results between your uncertainties, often referred to as the En ratio. Labs are encouraged to do inter-lab checks to verify that if another lab were to test a meter just after I had done it, the results would be the same once our uncertainties were taken into account.
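The inter-lab comparison can be sketched like this (the 10 V figures below are invented purely for illustration):

```python
import math

def en_ratio(x1, u1, x2, u2):
    """Normalised error (En) between two labs' results.

    x1, x2: the two measured values; u1, u2: their expanded (k=2)
    uncertainties. |En| <= 1 means the results agree within uncertainties.
    """
    return (x1 - x2) / math.sqrt(u1 ** 2 + u2 ** 2)

# Two labs measure the same nominal 10 V reference (made-up numbers):
en = en_ratio(10.00002, 0.00005, 9.99998, 0.00004)
print(abs(en) <= 1.0)  # True: the two results are consistent
```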

I have not mentioned my lab as, to be honest, we are a small lab and not up with the big labs yet. Part of me is also using this to get feedback from others who do this, to reaffirm what I know and maybe learn a thing or two along the way. It's quite amazing, when you look outside your bubble, how differently countries do their calibration or even inspection. I trained in photography and digital imaging and now I work in a calibration lab, so I will be the first to admit this isn't what I trained in; I learnt it the hard way.
« Last Edit: August 10, 2021, 06:21:58 pm by mendip_discovery »
Motorcyclist, Nerd, and I work in a Calibration Lab :-)
--
So everyone is clear, Calibration = Taking Measurement against a known source, Verification = Checking Calibration against Specification, Adjustment = Adjusting the unit to be within specifications.
 

Offline try

  • Regular Contributor
  • *
  • Posts: 112
  • Country: de
  • Metrology from waste
Re: Uncertainty Budgets and Decision Rules
« Reply #41 on: August 10, 2021, 07:52:06 pm »
To answer a few questions so far: RSS means "root sum square". It's an accepted way to combine all the uncertainties in your budget, provided they are all at 1 sigma. The formula in Excel is "=SQRT(uc1^2+uc2^2+uc3^2)", where uc1, uc2, uc3 are the separate contributors at 1 sigma. This gives us what is called the standard uncertainty; we can then multiply that by the k factor to get the expanded uncertainty. I personally calculate my k factor using the Welch-Satterthwaite formula.

From somebody who owns and runs a cal lab, for a statement like
[...RSS means "root sum square" its an accepted way to combine all the uncertainties...]

I would have expected a different, lengthier and deeper explanation that allows one to understand the formula instead of learning it by heart and applying it without understanding.
That is disappointing!
Kindergarten, somehow.

« Last Edit: August 10, 2021, 07:57:11 pm by try »
 

Offline RYcal

  • Contributor
  • Posts: 31
  • Country: nz
Re: Uncertainty Budgets and Decision Rules
« Reply #42 on: August 10, 2021, 08:03:44 pm »
To answer a few questions so far: RSS means "root sum square". It's an accepted way to combine all the uncertainties in your budget, provided they are all at 1 sigma. The formula in Excel is "=SQRT(uc1^2+uc2^2+uc3^2)", where uc1, uc2, uc3 are the separate contributors at 1 sigma. This gives us what is called the standard uncertainty; we can then multiply that by the k factor to get the expanded uncertainty. I personally calculate my k factor using the Welch-Satterthwaite formula.

From somebody who owns and runs a cal lab, for a statement like
[...RSS means "root sum square" its an accepted way to combine all the uncertainties...]

I would have expected a different, lengthier and deeper explanation that allows one to understand the formula instead of learning it by heart and applying it without understanding.
That is disappointing!
Kindergarten, somehow.

And this is where I leave the conversation.
 

Offline bdunham7

  • Super Contributor
  • ***
  • Posts: 8010
  • Country: us
Re: Uncertainty Budgets and Decision Rules
« Reply #43 on: August 10, 2021, 08:06:08 pm »
I would have expected a different, lengthier and deeper explanation that allows one to understand the formula instead of learning it by heart and applying it without understanding.
That is disappointing!
Kindergarten, somehow.

I literally just wanted to know what the letters meant, since it wasn't obvious to me given where they were found.  I understand the principles, and those who don't might be better served by a link to a good explanation of the relationship between variance and standard deviation, as well as a derivation of why you take the square root of the sum of squared standard deviations to combine them instead of just adding them.  The answer, of course, is that variances are what you are actually adding.

Here's the kindergarten version!

https://www.mathsisfun.com/data/standard-deviation.html
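The "variances are what you are actually adding" point can be shown in a few lines. A minimal sketch, assuming independent contributors (the mV figures are hypothetical, loosely mirroring the imported/man-spec/resolution terms discussed earlier in the thread):

```python
import math

# Hypothetical 1-sigma contributors in mV: imported uncertainty,
# manufacturer spec (already divided by sqrt(3)), and resolution.
contributors = [0.0020, 0.0029, 0.0006]

# Independent uncertainties add as variances, so the combined standard
# uncertainty is the square root of the sum of the squares (RSS):
u_c = math.sqrt(sum(u ** 2 for u in contributors))

# Expand to roughly 95 % coverage with k = 2:
U = 2.0 * u_c
print(u_c, U)
```

Note that simply summing the contributors would overstate the result, because independent errors partly cancel; that cancellation is exactly what adding variances instead of standard deviations captures.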



A 3.5 digit 4.5 digit 5 digit 5.5 digit 6.5 digit 7.5 digit DMM is good enough for most people.
 
The following users thanked this post: RYcal

Offline try

  • Regular Contributor
  • *
  • Posts: 112
  • Country: de
  • Metrology from waste
Re: Uncertainty Budgets and Decision Rules
« Reply #44 on: August 10, 2021, 08:30:16 pm »
Hello bdunham7,

I am not criticizing you, nor any question you asked, but the one statement I quoted.

When you were asking questions about the lingo, I totally agreed with you. I had never read the term "CMC" before but was able to find it using Google.
Is "CMC" a term I am supposed to know?

But anyway, I have no expectations anymore.
 

Offline bsdphk

  • Regular Contributor
  • *
  • Posts: 206
  • Country: dk
Re: Uncertainty Budgets and Decision Rules
« Reply #45 on: August 10, 2021, 08:39:54 pm »
The limiting factor is the resolution and that does give you a bit of a hit with regards to your imported ppm.

I wouldn't call it a "limiting factor" as much as an "invalidating factor".

In your example you calculate the standard deviation using formulas which assume a bell-shaped noise process, but your lack of resolution, relative to the noise in the signal, gives you a step-noise process (ie: "±1" noise).

When the resolution is insufficient for the noise, the stddev thus determined is useless, because it depends on the magnitude of the measured artifact.

Assume a perfect digital voltmeter which measures voltage with 100 mV resolution: it can show 4.8, 4.9, 5.0, 5.1, 5.2 and so on, and it rounds perfectly.

If you measure a 5.0000…V reference, it will constantly show "5.0"; it will do that all the way from 4.9500…1 V to 5.0499…9 V input.

Congratulations: You have a meter with zero stddev!

If instead you measure a 5.0500… V reference, the meter will show "5.0" half the time and "5.1" the other half, and your stddev is now 0.0534 V.

That is both a statistically unsound and a practically useless result: The uncertainty (estimate) should not vary cyclically across the range.

There are a number of possible workarounds, but they all boil down to not just measuring a single voltage.

The easiest is to sweep the voltage you measure across at least the full range of the last two digits of the DUT, with an extra digit of resolution (ie: at least 1000 measurements) and calculate the stddev of relative measurement error (ie:  (Vmeasured - Vactual) / Vactual).

If you cannot do that (need K-V divider + lots of knob-fiddling-time), you can instead (carefully) superimpose a (good!) AC signal on top of your reference voltage, in order to provide the bell-shaped noise process required by the formula you use, but you then have to compensate for the noise you added etc.

It sounds absolutely bonkers first time you hear it, but it is *so* much easier to find the stddev for a 3458A than for a handheld 3½ digit meter...
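The perfect-rounding meter above is easy to simulate. A minimal sketch (the 1 mV noise figure is an assumption, added only so the quantiser has something to round; the exact stddev values depend on the noise model):

```python
import random
import statistics

RESOLUTION = 0.1   # a perfect DVM with 100 mV display steps
NOISE_SD = 0.001   # 1 mV of Gaussian input noise (an assumed figure)

def meter_reading(v_true):
    """Ideal rounding DVM: add a little noise, then quantise to the display."""
    return round((v_true + random.gauss(0.0, NOISE_SD)) / RESOLUTION) * RESOLUTION

random.seed(1)
mid_code = [meter_reading(5.000) for _ in range(1000)]  # reference sits mid-code
on_edge  = [meter_reading(5.050) for _ in range(1000)]  # reference sits on a code edge

print(statistics.pstdev(mid_code))  # every reading is "5.0": stddev collapses to zero
print(statistics.pstdev(on_edge))   # readings split between "5.0" and "5.1": stddev near 0.05
```

The same instrument yields wildly different stddev estimates depending only on where the reference voltage sits relative to a code edge, which is exactly the "cyclical across the range" problem described above.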
 

Offline bdunham7

  • Super Contributor
  • ***
  • Posts: 8010
  • Country: us
Re: Uncertainty Budgets and Decision Rules
« Reply #46 on: August 10, 2021, 08:57:51 pm »
Is "CMC" a term I am supposed to know?

I didn't, but its general meaning was apparent from the context.  I think the block for me with 'RSS' was where it appeared, plus the fact that RSS means something entirely different to me, in a vastly different context--as does CMC!  That's the problem with acronyms: often they are insider talk and have different meanings in different fields.  The person using the term always knows what it means, of course!
A 3.5 digit 4.5 digit 5 digit 5.5 digit 6.5 digit 7.5 digit DMM is good enough for most people.
 

Offline bdunham7

  • Super Contributor
  • ***
  • Posts: 8010
  • Country: us
Re: Uncertainty Budgets and Decision Rules
« Reply #47 on: August 10, 2021, 09:34:56 pm »
That is both a statistically unsound and a practically useless result: The uncertainty (estimate) should not vary cyclically across the range.

I thought this was the point of adding an additional uncertainty based solely on the resolution limit?  And according to his calculation, that works out to about what your worst-case measurement error would be?  And, of course, this is in addition to any observed or specified uncertainty in the readings.
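That resolution term is commonly taken as half the last display digit treated as a rectangular bound. A quick sketch (the 1 mV last digit is a hypothetical value, not tied to any instrument in the thread):

```python
import math

# Last-digit resolution of the display, e.g. 1 mV (hypothetical):
resolution = 0.001

# The true value can sit anywhere within +/- half a digit of what is
# displayed, so treat the half-interval as a rectangular bound:
u_res = (resolution / 2.0) / math.sqrt(3)
print(u_res)
```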

A 3.5 digit 4.5 digit 5 digit 5.5 digit 6.5 digit 7.5 digit DMM is good enough for most people.
 

Offline bsdphk

  • Regular Contributor
  • *
  • Posts: 206
  • Country: dk
Re: Uncertainty Budgets and Decision Rules
« Reply #48 on: August 11, 2021, 06:18:04 am »
I thought this was the point of adding an additional uncertainty based solely on the resolution limit?

It probably is, but that seems to just be a crude fudge to avoid having to do the necessary measurements.

Look at my example again: depending on what precise voltage the cal lab happens to use, you add that additional uncertainty to a measured stddev of either zero or 0.0534, giving very different results.

In practice it is probably not much of a problem, because nobody really cares about the actual stddev of low resolution instruments, and for high resolution instruments there will generally be sufficient electrical noise to satisfy the statistical assumptions.

I reacted to this because the example you used was almost a schoolbook example of the problem: when your measurements have a ±1 nature, ie: a digital presentation, changing any single measurement one step up or down should have no effect on the resulting uncertainty, and your example failed that test.

As a general rule of thumb: If you only have one, two or three different sequential numbers in your data, stddev is not what you want.

This topic shades into "Uncertain Numbers", and if you want to understand it better, I can highly recommend these two reports:

Sensitivity in Risk Analyses with Uncertain Numbers: https://www.osti.gov/biblio/886899/

Constructing probability boxes and Dempster-Shafer structures: https://www.osti.gov/biblio/1427258

They are heavy, but you do not need to understand it all, much less follow all the math, to learn how to spot the quicksand.

 

Offline MegaVolt

  • Frequent Contributor
  • **
  • Posts: 930
  • Country: by
Re: Uncertainty Budgets and Decision Rules
« Reply #49 on: August 11, 2021, 08:20:26 am »
mendip_discovery, thank you for this thread. I read it with great interest.

Can you give an example of a complete calculation, for instance for calibrating the 10 V output of a Fluke 732a (1) using a 3458A and another Fluke 732a (2) for which there is data from a top-level calibration laboratory?

That is: we take 100 measurements that show the difference between the two standards. We know the parameters of the laboratory Fluke 732a, the temperature in the laboratory, and the 3458A's datasheet figures.

Can you show an example of how we get the data for the Fluke 732a (1) being calibrated?
 
The following users thanked this post: TiN, MiDi

