Author Topic: Tolerance stackup  (Read 5697 times)


Offline edbaTopic starter

  • Contributor
  • Posts: 25
  • Country: gb
Tolerance stackup
« on: October 25, 2021, 07:03:14 pm »
I am currently working on a problem calculating the total tolerance error of a circuit for a test specification.

Most of the material on the internet concerning tolerance stackup is aimed at mechanical engineering, although I think it applies equally to electrical/electronic engineering.

OK, as an example, say I have an adjustable voltage regulator where the voltage is set by two resistors of +/-1% tolerance, and say the voltage reference of the regulator is +/-1.5%. There are now three independent variables. A worst-case analysis would put the tolerance at 1+1+1.5 = 3.5%. This is quite a pessimistic view of the tolerance. A more realistic one is obtained by using the Root Sum Square (RSS) method: square all the values, add them together, then take the square root, so (1^2+1^2+1.5^2)^0.5 = 2.06%.
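For reference, here are those two calculations as a quick sketch (Scilab here, but any tool will do):
Code: [Select]
// worst case vs RSS for the three tolerances above (all in %)
tol = [1 1 1.5];
worst_case = sum(tol)            // 3.5
rss        = sqrt(sum(tol.^2))   // about 2.06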

My question is: has anybody got experience of using RSS, does anyone know of a good tutorial, and why does it work?
 

Offline Kleinstein

  • Super Contributor
  • ***
  • Posts: 14861
  • Country: de
Re: Tolerance stackup
« Reply #1 on: October 25, 2021, 07:18:31 pm »
The RSS is the normal method in many cases, though there are usually additional factors to take into account for how much each part / parameter affects the result. With a resistor divider or gain network this can give an additional factor slightly smaller than 1. So in addition to the tolerance there is a factor for how much the output changes with that parameter. This factor is also squared, like the tolerance.

The addition as RSS is correct for the standard deviation of independent variables with a normal distribution. It is not correct for something like the worst case. Electronic parts may be binned towards the worst case inside a tolerance band, so the RSS may then be only an approximation.
With many small contributions the overall error will become normally distributed. So at least for the smaller contributions the RSS form is usually OK.
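As an illustrative sketch of that weighted RSS (the 0.5 sensitivity factors are assumed here for an equal-resistor divider, not taken from the post above):
Code: [Select]
// RSS with sensitivity factors: each term is (factor * tolerance)^2
tol = [1 1 1.5];        // two resistors and the reference, in %
c   = [0.5 0.5 1];      // assumed sensitivity of the output to each input (illustrative)
total = sqrt(sum((c.*tol).^2))   // about 1.66 %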

 

Offline mendip_discovery

  • Frequent Contributor
  • **
  • Posts: 998
  • Country: gb
Re: Tolerance stackup
« Reply #2 on: October 25, 2021, 07:57:43 pm »
RSS is how measurement uncertainty is combined, so there is some info on the isobudgets site. I did attempt to show an example budget for the 121GW, but it wasn't easy to get it across to people; I'm not sure this forum is ready for the discussion.

If you want I can share a blank budget calculator you can use.

Square root is not your only option, as you might be able to argue that some things are distributed differently, such as triangular.

But I have never been on the building side of things, so I am not sure how the manufacturers build up their characterisations of devices; lots of testing, I would say.

If the resistance is at the extremes of the 1% resistors, how much of an effect will it have on the final measurement at the maximum measured voltage? You could possibly use a coefficient.
Motorcyclist, Nerd, and I work in a Calibration Lab :-)
--
So everyone is clear, Calibration = Taking Measurement against a known source, Verification = Checking Calibration against Specification, Adjustment = Adjusting the unit to be within specifications.
 
The following users thanked this post: bck

Offline bdunham7

  • Super Contributor
  • ***
  • Posts: 8012
  • Country: us
Re: Tolerance stackup
« Reply #3 on: October 25, 2021, 08:21:32 pm »
My question is: has anybody got experience of using RSS, does anyone know of a good tutorial, and why does it work?

Why it 'works', to the extent that it actually does, is simply math: if you have two uncorrelated sets of data with n points each, say A and B, each normally distributed, then the sums of the respective data points An + Bn will also be normally distributed, with a standard deviation that is the RSS of the SDs of A and B.

Why it might not work for you is that your data sets may not actually be uncorrelated and/or they may not be normally distributed.  I think it would be an egregious error to assume any component has a normal distribution centered around its nominal spec.  Also, when you are making a product, or doing anything else for that matter, you have to decide how many standard deviations to set your tolerances at.  With limits at 1 sigma, about 1/3 of your results will be out of limits.  So what does a "1% resistor" actually imply?  In your example, is 3.5% really the worst case?  Generally your choices are to test yourself, pay a lot for better characterization of components, or use greater margins.
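A quick numerical sanity check of that RSS property (a sketch in Scilab; the specific SD values are just illustrative):
Code: [Select]
// two independent, normally distributed data sets; the SD of their sum
// should come out as the RSS of the individual SDs
n = 100000;
A = grand(n, 1, "nor", 0, 1.0);     // SD 1.0
B = grand(n, 1, "nor", 0, 1.5);     // SD 1.5
stdev(A + B)                        // close to sqrt(1.0^2 + 1.5^2) = 1.803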
A 3.5 digit 4.5 digit 5 digit 5.5 digit 6.5 digit 7.5 digit DMM is good enough for most people.
 

Online TimFox

  • Super Contributor
  • ***
  • Posts: 8575
  • Country: us
  • Retired, now restoring antique test equipment
Re: Tolerance stackup
« Reply #4 on: October 25, 2021, 08:23:23 pm »
One could simplify this answer to "RSS is probably right, but the worst-case answer is never wrong."
One thing to watch out for when doing RSS analysis is to avoid correlated variables--the statistical analysis assumes that each variable entering into the sum is statistically independent of the others.
When doing RSS sums, one can either add absolute values or fractional values, e.g. 1% resistors and +/- 0.5 V batteries.  If you have both, then convert the errors to the same type (absolute or fractional).
It's a little trickier to estimate the error on a voltage divider:  start by writing the explicit algebraic formula in terms of the two resistors.
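For what it's worth, a sketch of that algebra (not from the post above): writing Vout = Vref * R2/(R1+R2) and differentiating gives dVout/Vout = dVref/Vref + (R1/(R1+R2)) * (dR2/R2 - dR1/R1), so the magnitude of the fractional sensitivity to either resistor is R1/(R1+R2) (0.5 when R1 = R2), and the sensitivity to Vref is 1.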
 

Offline mendip_discovery

  • Frequent Contributor
  • **
  • Posts: 998
  • Country: gb
Re: Tolerance stackup
« Reply #5 on: October 25, 2021, 08:31:07 pm »
Using a basic voltage divider [1], I can see it only changing the measurement by 0.6%, so I think you can apply a coefficient of 0.06.

So I get a % error of 1.734%



Edit: thinking about it, my error was actually 0.3% for a change of one resistor, so it drops to 1.7327%.

But there will be other elements to add: temperature effects such as the change in resistance caused by the resistor warming up, yearly drift, etc. It could be quite fun to work it all out; you might have to offer to make Dave a chicken dinner and see if he will do a video on how to do the maths.

[1] https://learn.sparkfun.com/tutorials/voltage-dividers/all
« Last Edit: October 25, 2021, 09:02:02 pm by mendip_discovery »
Motorcyclist, Nerd, and I work in a Calibration Lab :-)
--
So everyone is clear, Calibration = Taking Measurement against a known source, Verification = Checking Calibration against Specification, Adjustment = Adjusting the unit to be within specifications.
 

Offline bdunham7

  • Super Contributor
  • ***
  • Posts: 8012
  • Country: us
Re: Tolerance stackup
« Reply #6 on: October 25, 2021, 08:52:34 pm »
Square root is not your only option, as you might be able to argue that some things are distributed differently, such as triangular.

If you are applying accepted professional standards to a particular task then you can refer to those regarding the supposed distribution.  Otherwise, I think that instead of 'argue' you should say 'demonstrate with data and/or math'. 



A 3.5 digit 4.5 digit 5 digit 5.5 digit 6.5 digit 7.5 digit DMM is good enough for most people.
 

Offline bdunham7

  • Super Contributor
  • ***
  • Posts: 8012
  • Country: us
Re: Tolerance stackup
« Reply #7 on: October 25, 2021, 09:07:30 pm »
Using a basic voltage divider [1], I can see it only changing the measurement by 0.6%, so I think you can apply a coefficient of 0.06.

So I get a % error of 1.734%

Edit: thinking about it, my error was actually 0.3% for a change of one resistor, so it drops to 1.7327%.

Where are you getting those numbers?  Are you saying the sensitivity to a percentage change in either R1 or R2 is independent of their actual values or ratio?  Did you try that with a few different numbers? And why did you move the decimal point (go from 0.6 to 0.06)?
A 3.5 digit 4.5 digit 5 digit 5.5 digit 6.5 digit 7.5 digit DMM is good enough for most people.
 

Offline bck

  • Contributor
  • Posts: 15
  • Country: de
Re: Tolerance stackup
« Reply #8 on: October 25, 2021, 09:49:42 pm »
You can use "GUM Workbench Edu" for that. (Free Edu version)

I'll make the calculation for that example setup.
Open a new Page and click on "Model Equation"
Now just enter how your variables depend on each other.
For our example that is: Vout = (R2 / (R1+R2)) * Vref. If you click at the bottom, the Quantity table should appear and you can enter the units and a short description.
Now move to the Tab "Quantity Data" and enter the tolerances/values for your resistors.
Variable|Type|Distribution|Value|Halfwidth of Limits
R1|B|Rect.|1000|10
R2|B|Rect.|1000|10
Vref|B|Rect.|10|0.15
This means, for example, that R1 is 1000 Ohm +/- 10 Ohm (1%).
After entering the values click on "Budget" to see your result.

At the bottom is the result (5V +- 0.096V)

Standard Uncertainty: the application converts our rectangular-distributed tolerance into a standard uncertainty (k=1); for a rectangular distribution that is done by dividing the halfwidth by sqrt(3).
The "Sensitivity Coefficient" tells you how much your output changes if your input changes. Example: if Vref changes by 1 Volt, the output will change by 0.5 Volt.
The "Uncertainty Contribution" is "Standard Uncertainty" multiplied by "Sensitivity Coefficient".
~ Alexander Becker
 

Offline mendip_discovery

  • Frequent Contributor
  • **
  • Posts: 998
  • Country: gb
Re: Tolerance stackup
« Reply #9 on: October 25, 2021, 10:09:00 pm »
Using a basic voltage divider [1], I can see it only changing the measurement by 0.6%, so I think you can apply a coefficient of 0.06.

So I get a % error of 1.734%

Edit: thinking about it, my error was actually 0.3% for a change of one resistor, so it drops to 1.7327%.

Where are you getting those numbers?  Are you saying the sensitivity to a percentage change in either R1 or R2 is independent of their actual values or ratio?  Did you try that with a few different numbers? And why did you move the decimal point (go from 0.6 to 0.06)?

Coefficient is normally 1, 0.6% of 1 is...

I used the calculator on there and applied 1% to R1, then tried it with R2, and did all the variations of % error. I got 0.02V with just one resistor in error, and 0.04V with both, so I did the percentage calc from that.

I'm not an expert, just trying to apply enough stupidity to work through the problem.

It would be nice to know the Vin, the Vout and the planned resistor values, as that way we could all play with the maths on something closer to the actual case.

So the resistors have a Gaussian probability, but the centre can shift, so does that make it rectangular? I'm not sure, I ain't a mathematician.
Motorcyclist, Nerd, and I work in a Calibration Lab :-)
--
So everyone is clear, Calibration = Taking Measurement against a known source, Verification = Checking Calibration against Specification, Adjustment = Adjusting the unit to be within specifications.
 

Offline bdunham7

  • Super Contributor
  • ***
  • Posts: 8012
  • Country: us
Re: Tolerance stackup
« Reply #10 on: October 25, 2021, 10:35:39 pm »
Coefficient is normally 1, 0.6% of 1 is...

um...not 0.06!

I see no justification for converting from one distribution type to another if it is unknown in the first place.  I don't know of any convention in engineering that allows you to opt for the assumption of a rectangular distribution given a tolerance with no further information--if there is such a convention, someone tell me about it.  You're just taking a method which apparently is narrowly sanctioned in a specific field and applying it broadly.

Let's just look at the error a 1% deviation would cause.   I'll go with R1 = R2 = 1k and Vref = 10V just to pick some numbers.  So if R1 is 1% high, Vout = Vref * R2 /(R1 + R2) = 10 * 1/2.01 = 4.975V, which is 0.5% low.  (5 - 4.975 = 0.025, 0.025/5 = 0.005 = 0.5%)  A 1% deviation in R1 causes a 0.5% change in the result Vout, so I'd assume your 'coefficient' should be 0.5, unless I'm misunderstanding what 'coefficient' means here.  A deviation in R2 will have the same impact at this ratio of R1/R2.

The ratio of the resistors matters, and the coefficient will change with it.  The RSS of 0.5, 0.5 and 1.5 is 1.66, so 1.66% would be my answer for R1=R2.
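A quick sketch of that perturbation check in Scilab (same values as above):
Code: [Select]
// perturb R1 by 1% and look at the fractional change in Vout
R1 = 1000; R2 = 1000; Vref = 10;
Vnom = Vref*R2/(R1+R2);                 // 5 V
Vhi  = Vref*R2/(1.01*R1+R2);            // R1 1% high -> about 4.9751 V
(Vnom - Vhi)/Vnom*100                   // about 0.5, i.e. a 0.5 sensitivity coefficient
// combined: sqrt((0.5*1)^2 + (0.5*1)^2 + (1*1.5)^2) = 1.66 %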
A 3.5 digit 4.5 digit 5 digit 5.5 digit 6.5 digit 7.5 digit DMM is good enough for most people.
 

Offline mendip_discovery

  • Frequent Contributor
  • **
  • Posts: 998
  • Country: gb
Re: Tolerance stackup
« Reply #11 on: October 25, 2021, 11:17:35 pm »
Coefficient is normally 1, 0.6% of 1 is...

um...not 0.06!

I see no justification for converting from one distribution type to another if it is unknown in the first place.  I don't know of any convention in engineering that allows you to opt for the assumption of a rectangular distribution given a tolerance with no further information--if there is such a convention, someone tell me about it.  You're just taking a method which apparently is narrowly sanctioned in a specific field and applying it broadly.

Let's just look at the error a 1% deviation would cause.   I'll go with R1 = R2 = 1k and Vref = 10V just to pick some numbers.  So if R1 is 1% high, Vout = Vref * R2 /(R1 + R2) = 10 * 1/2.01 = 4.975V, which is 0.5% low.  (5 - 4.975 = 0.025, 0.025/5 = 0.005 = 0.5%)  A 1% deviation in R1 causes a 0.5% change in the result Vout, so I'd assume your 'coefficient' should be 0.5, unless I'm misunderstanding what 'coefficient' means here.  A deviation in R2 will have the same impact at this ratio of R1/R2.

The ratio of the resistors matters, and the coefficient will change with it.  The RSS of 0.5, 0.5 and 1.5 is 1.66, so 1.66% would be my answer for R1=R2.

Dammit, I should know better than to do maths after dark while watching TV. The numbers I used for the resistors were the ones they used in the example, so I may have some % error due to the resolution of the calculator.

I am applying what I know to a problem. Not worried if I am wrong, but most of us here are armchair experts. I was just thinking of how I would look at the issue.

You would normally use the coefficient to convert one number into the same form as the others. So if you were doing the budget in ppm you could use it to convert a % into ppm by applying a ratio. I have also seen it used to scale numbers to bring them in line with others: you have a big table with a flatness of x, but as you are using only 1/4 of that table you can use the coefficient to bring it down by 1/4. Later on, once I have slept, I will drag up an example of using it to correct for temperature variations on an item, converting the temperature into microns when you know the thermal expansion of gauge blocks (11.5 um per m per degree).
Motorcyclist, Nerd, and I work in a Calibration Lab :-)
--
So everyone is clear, Calibration = Taking Measurement against a known source, Verification = Checking Calibration against Specification, Adjustment = Adjusting the unit to be within specifications.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17215
  • Country: us
  • DavidH
Re: Tolerance stackup
« Reply #12 on: October 26, 2021, 01:15:06 am »
The distribution of the output can be calculated from the distributions of the individual tolerances, however in the real world the guaranteed peak error is likely more important, and that is what interval arithmetic would be used for.  I want a guaranteed error, not an estimate.

When I design a system with a digital readout, I care less about the distribution of output values than the peak error, which incidentally is why I consider controlling flicker noise and linearity very important.
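As an illustrative sketch of that guaranteed-bounds idea, here is a brute-force corner evaluation of the divider example from earlier in the thread, using bdunham7's illustrative values and the OP's 1% / 1.5% tolerances (valid as a worst case here because the output is monotone in each input):
Code: [Select]
// brute-force worst case: evaluate the divider at every tolerance corner
Vref = 10; R1 = 1000; R2 = 1000;
vals = [];
for r1 = [0.99 1.01]*R1
    for r2 = [0.99 1.01]*R2
        for v = [0.985 1.015]*Vref
            vals = [vals, v*r2/(r1+r2)];
        end
    end
end
mprintf("Vout between %.4f V and %.4f V\n", min(vals), max(vals));
// roughly 4.876 V to 5.126 V, i.e. about +/-2.5 %, versus ~1.66 % from the RSS estimate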
 

Offline mzzj

  • Super Contributor
  • ***
  • Posts: 1285
  • Country: fi
Re: Tolerance stackup
« Reply #13 on: October 26, 2021, 07:04:50 pm »

I see no justification for converting from one distribution type to another if it is unknown in the first place.  I don't know of any convention in engineering that allows you to opt for the assumption of a rectangular distribution given a tolerance with no further information--if there is such a convention, someone tell me about it.  You're just taking a method which apparently is narrowly sanctioned in a specific field and applying it broadly.

I can somehow "see" where mendip got the rectangular distribution idea but I'm not sure if it is applicable to this situation.
Rectangular distribution is usually used if you don't have any better idea and you can only estimate/assume upper and lower bounds of error or tolerance. (often called type B uncertainty)
Section 7.1.2 https://www.npl.co.uk/special-pages/guides/gpg11_uncertainty.pdf     
 

Offline mzzj

  • Super Contributor
  • ***
  • Posts: 1285
  • Country: fi
Re: Tolerance stackup
« Reply #14 on: October 26, 2021, 07:27:54 pm »
The distribution of the output can be calculated from the distributions of the individual tolerances, however in the real world the guaranteed peak error is likely more important, and that is what interval arithmetic would be used for.  I want a guaranteed error, not an estimate.

When I design a system with a digital readout, I care less about the distribution of output values than the peak error, which incidentally is why I consider controlling flicker noise and linearity very important.
Absolute hard values are difficult. First of all, all of your reference instruments are calibrated with an uncertainty stated at only 95% confidence... :P

Another approach would be to use a larger coverage factor: standard uncertainty is only at the ~70% level.  Calibration or measurement uncertainty is usually stated at k=2, the 95% level, but you could also use for example k=4, which gives 99.99% coverage.
A 1mV measurement uncertainty with 95% probability is the same as 2mV with 99.99% (and there are some potholes along the way if we dig deeper..).
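For reference, a quick sketch of where those coverage figures come from (assuming a normal distribution):
Code: [Select]
// coverage probability of a normal distribution at coverage factor k
k = [1 2 4];
coverage = erf(k/sqrt(2))     // about 0.683, 0.954, 0.99994
// so a 0.5 mV standard uncertainty is 1 mV at k=2 (~95 %) or 2 mV at k=4 (~99.99 %)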


 
 

Offline mendip_discovery

  • Frequent Contributor
  • **
  • Posts: 998
  • Country: gb
Re: Tolerance stackup
« Reply #15 on: October 26, 2021, 07:59:25 pm »
I can somehow "see" where mendip got the rectangular distribution idea but I'm not sure if it is applicable to this situation.
Rectangular distribution is usually used if you don't have any better idea and you can only estimate/assume upper and lower bounds of error or tolerance. (often called type B uncertainty)
Section 7.1.2 https://www.npl.co.uk/special-pages/guides/gpg11_uncertainty.pdf   

I work in a 17025 cal lab, so rectangular is where you head in the initial stages of the maths, until you start to look deeper.

I had a little think about it last night and did a little maths just now. I was just interested to know the effect the resistors have on the division.




Now I am fully aware I may be wrong but I hope I am barking down the right street.

« Last Edit: October 26, 2021, 08:04:50 pm by mendip_discovery »
Motorcyclist, Nerd, and I work in a Calibration Lab :-)
--
So everyone is clear, Calibration = Taking Measurement against a known source, Verification = Checking Calibration against Specification, Adjustment = Adjusting the unit to be within specifications.
 

Offline Kleinstein

  • Super Contributor
  • ***
  • Posts: 14861
  • Country: de
Re: Tolerance stackup
« Reply #16 on: October 26, 2021, 08:26:13 pm »
The sensitivity coefficient is still wrong: why divide by 100? The example calculation (to avoid the derivative) uses a 1% resistor change and gives about a 0.5% output change, so a 0.5 sensitivity factor. For the slightly different resistor ratio of the original, the number will be a little off from 0.5, but not by much.
 

Offline mendip_discovery

  • Frequent Contributor
  • **
  • Posts: 998
  • Country: gb
Re: Tolerance stackup
« Reply #17 on: October 26, 2021, 08:37:26 pm »
The sensitivity coefficient is still wrong: why divide by 100? The example calculation (to avoid the derivative) uses a 1% resistor change and gives about a 0.5% output change, so a 0.5 sensitivity factor. For the slightly different resistor ratio of the original, the number will be a little off from 0.5, but not by much.

1 is a whole, aka 100% of the value being processed, but only 0.5% of 1 is needed. EDIT: Yeah, I ain't the brightest spark. Had a play with it.

But let's say I use the example SparkFun uses: 5V ref, R1=1700, R2=3300, Vout is 3.3V; the % error is 0.34%.

« Last Edit: October 26, 2021, 09:07:19 pm by mendip_discovery »
Motorcyclist, Nerd, and I work in a Calibration Lab :-)
--
So everyone is clear, Calibration = Taking Measurement against a known source, Verification = Checking Calibration against Specification, Adjustment = Adjusting the unit to be within specifications.
 

Offline MichaelPI

  • Contributor
  • Posts: 32
  • Country: de
Re: Tolerance stackup
« Reply #18 on: October 31, 2021, 02:46:38 pm »
Another approach can be Monte Carlo simulation. You can use this method to find the variation of the output variable you are interested in, given the variations of your input variables.
It is usually used to solve more complex problems in a numerical way, but you can also apply it to your adjustable voltage regulator example.

First of all you need to have an understanding of the distribution and variation of your input variables. A normal distribution may or may not be the correct distribution. E.g. a resistor supplier could remove the resistors with tighter tolerance from your assumed normally distributed +/- 1% resistors and sell them as +/- 0.5%. You could have zero resistors close to your expected mean, and the distribution could look like a 'U' instead of a bell curve.

Let us assume we have a normal distribution. The next step is to understand what the +/- 1 % tolerance actually means: should it be +/- 1, +/- 2 or +/- 3 standard deviations? In the EXCEL sheet the assumption is 95%, which means a deviation of +/- 2 sigma (+/- 2 standard deviations); that implies you expect approx. 5% of the resistors to be outside the tolerance limits.

 
If you have done that, you can use e.g. SCILAB (EXCEL or as far as I know LTSpice should also do the trick) to generate random data (e.g. 100 samples) for each of the input variables R1, R2, V_REF based on the distribution you selected.

In my example I have created 10000 normally distributed samples for R1, R2 and V_REF; the tolerance is assumed to be +/- 2 sigma (~95% of the values are within the tolerance). The output voltage of the adjustable voltage regulator is then calculated from the equation (V_OUT_VAR = ((R1_VAR+R2_VAR)./R1_VAR).*V_REF_VAR) for all the samples. You can clearly see that the output variation is again a bell-shaped curve. You can then calculate the standard deviation of the output data.

In this particular example I get a standard deviation of approx. 1.07 % of the output curve - that means that typically ~68 % of your parts provide an output voltage within +/- 1.07%.

If I use +/- 2 sigma (2 standard deviations), the tolerance becomes a 2.14% deviation, with a ~95% probability of being within that 2.14%.

I hope this helps.

Regards


 


 


SCILAB 6.1.1 code example:
Code: [Select]
clear();

R1_AVG = 10*10^3;                            // in ohm
R2_AVG = 10*10^3;                            // in ohm
V_REF_AVG = 1.25;                                // in V

R1_TOL_PERCENT = 1;                           // in % at sigma level
R2_TOL_PERCENT = 1;                           // in % at sigma level
V_REF_TOL_PERCENT = 2;                        // in % at sigma level


R1_TOL_ABS = R1_AVG*R1_TOL_PERCENT/100;      // in ohms at sigma level
R2_TOL_ABS = R2_AVG*R2_TOL_PERCENT/100;      // in ohms at sigma level
V_REF_TOL_ABS = V_REF_TOL_PERCENT*V_REF_AVG/100; // in V at sigma level


SIGMA_LEVEL = 2;                              // +/- 2 s => ~95 percent of all values
                                              // are within the defined tolerance range


NUMBER_OF_RUNS = 10000;
NUMBER_OF_CLASSES = 100;

R1_VAR = grand(NUMBER_OF_RUNS, 1, "nor", R1_AVG, R1_TOL_ABS/SIGMA_LEVEL); // in R
R2_VAR = grand(NUMBER_OF_RUNS, 1, "nor", R2_AVG, R2_TOL_ABS/SIGMA_LEVEL); // in R
V_REF_VAR = grand(NUMBER_OF_RUNS, 1, "nor", V_REF_AVG, V_REF_TOL_ABS/SIGMA_LEVEL); // in V
V_OUT_VAR = ((R1_VAR+R2_VAR)./R1_VAR).*V_REF_VAR; // in V


V_OUT_VAR_RELATIVE = ((V_OUT_VAR./(((R1_AVG+R2_AVG)/R1_AVG)*V_REF_AVG))-1)*100; // in %

V_OUT_STD = stdev(V_OUT_VAR_RELATIVE);
V_OUT_MEAN = mean(V_OUT_VAR);

subplot(5,1,1);
histplot(NUMBER_OF_CLASSES, R1_VAR, normalization=%f);
title("R1 variation");
xlabel("Resistance in Ohm");
ylabel("Number of samples");

subplot(5,1,2);
histplot(NUMBER_OF_CLASSES, R2_VAR, normalization=%f);
title("R2 variation");
xlabel("Resistance in Ohm");
ylabel("Number of samples");

subplot(5,1,3);
histplot(NUMBER_OF_CLASSES, V_REF_VAR, normalization=%f);
title("V_REF variation");
xlabel("Voltage in V");
ylabel("Number of samples");

subplot(5,1,4);
histplot(NUMBER_OF_CLASSES, V_OUT_VAR, normalization=%f);
title("V_OUT variation in V");
xlabel("Voltage in V");
ylabel("Number of samples");
legend("Mean: "+string(V_OUT_MEAN)+"V");

subplot(5,1,5);
histplot(NUMBER_OF_CLASSES, V_OUT_VAR_RELATIVE, normalization=%t);
title("V_OUT variation in %");
xlabel("Relative deviation of V_OUT");
ylabel("Relative probability");
legend("Stdev: +/-"+string(V_OUT_STD)+"%");





Keithley 2700 + 7700, Prema 5000, Fluke 77, Hioki 3256-50, Sonel MIC30, EA-PS2332-025, Delta Electronica SM1540, Toellner 7402, Hameg 8131-2, HP 53181A, HP 5334B, Rigol DS1054Z, Philips 6303, Sefelec MGR10C
 
The following users thanked this post: DH7DN

Offline DH7DN

  • Regular Contributor
  • *
  • Posts: 129
  • Country: de
    • DH7DN Blog
Re: Tolerance stackup
« Reply #19 on: October 31, 2021, 05:44:14 pm »
Isn't the resistor tolerance just a statement of conformity? It says "the true value of this resistor deviates from a nominal value within a specified tolerance band (e. g. +/- 1%)". There is no information provided about the resistor uncertainty or its probability distribution. So maybe a rectangular distribution can be assumed for further calculations according to GUM.
vy 73 de DH7DN, My Blog
 

Offline Kleinstein

  • Super Contributor
  • ***
  • Posts: 14861
  • Country: de
Re: Tolerance stackup
« Reply #20 on: October 31, 2021, 08:05:43 pm »
The distribution of resistor values can be even worse: with some types they select the good ones and sell them as low tolerance, and sell the rest with higher tolerances. This way the good parts from the center are removed. This can apply to some types, but does not apply to all.
Besides selection, the other method is trimming to a target value, with more or less time spent depending on the tolerance class. So the lower grades just go through the machine faster, or use a lower grade machine for the trim.

A similar mechanism may apply to the reference. At least the high grades are usually trimmed and checked / sorted.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17215
  • Country: us
  • DavidH
Re: Tolerance stackup
« Reply #21 on: October 31, 2021, 09:56:16 pm »
The distribution of resistor values can be even worse: with some types they select the good ones and sell them as low tolerance, and sell the rest with higher tolerances. This way the good parts from the center are removed. This can apply to some types, but does not apply to all.

I know others have reported it for less reputable brands, but I have never seen it myself.
 

