> The limiting factor is the resolution and that does give you a bit of a hit with regards to your imported ppm.
I wouldn't call it a "limiting factor" as much as an "invalidating factor".
In your example you calculate the standard deviation using formulas which assume a bell-shaped (Gaussian) noise process, but your lack of resolution, relative to the noise in the signal, gives you a step-noise process (ie: "±1" noise) instead.
When resolution is insufficient for the noise, the stddev thus determined is useless, because it depends on the magnitude of the measured artifact.
Assume a perfect digital voltmeter which measures volts with 100 mV resolution: it can show 4.8, 4.9, 5.0, 5.1, 5.2 and so on, and it rounds perfectly.
If you measure a 5.0000… V reference, it will constantly show "5.0", and it will do that all the way from 4.9500…1 V to 5.0499…9 V input.
Congratulations: You have a meter with zero stddev!
If instead you measure a 5.0500… V reference, the meter will show "5.0" half the time and "5.1" the other half, and your stddev is now 0.05 V.
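You can see both cases in a toy simulation (an idealized meter model of my own, not any real instrument; the tiny Gaussian noise on the second reference is an assumption, just enough to make the reading flip):

```python
import math
import random
import statistics

def meter_reading(v_in, resolution=0.1):
    """Ideal 100 mV meter: rounds half-up to the nearest step, no other error."""
    return math.floor(v_in / resolution + 0.5) * resolution

# Case 1: a quiet 5.0000 V reference -- every sample quantizes to "5.0".
readings_a = [meter_reading(5.0) for _ in range(1000)]
print(statistics.pstdev(readings_a))   # 0.0 -- the "zero stddev" meter

# Case 2: a 5.0500 V reference with a little real noise (assumed 1 mV RMS)
# sits exactly on a step boundary, so readings flip between 5.0 and 5.1.
random.seed(1)
readings_b = [meter_reading(5.05 + random.gauss(0.0, 0.001))
              for _ in range(1000)]
print(statistics.pstdev(readings_b))   # ~0.05 V, set entirely by the step size
```

Note that the 0.05 V "stddev" tells you about where the input sits in the quantization step, not about the meter's noise.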
That is both a statistically unsound and a practically useless result: the uncertainty estimate should not vary cyclically across the range.
There are a number of possible workarounds, but they all boil down to not just measuring a single voltage.
The easiest is to sweep the measured voltage across at least the full range of the last two digits of the DUT, with an extra digit of resolution (ie: at least 1000 measurements), and calculate the stddev of the relative measurement error (ie: (Vmeasured - Vactual) / Vactual).
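A quick sketch of why the sweep works, using the same idealized 100 mV meter as above (the sweep range and step are illustrative assumptions, not a recipe for a real calibration run):

```python
import math
import statistics

def meter_reading(v_in, resolution=0.1):
    # Ideal 100 mV meter, rounding half-up to the nearest step.
    return math.floor(v_in / resolution + 0.5) * resolution

# Sweep 1000 points in 1 mV steps, covering many full quantization steps,
# so the quantization error is sampled uniformly instead of at one spot.
actual = [5.0 + k * 0.001 for k in range(1000)]          # 5.000 .. 5.999 V
rel_err = [(meter_reading(v) - v) / v for v in actual]

# The stddev of the relative error now reflects the quantization itself
# (roughly step/sqrt(12) relative to the reading), not where a single
# input happens to sit within a step.
print(statistics.pstdev(rel_err))
```

The point is that the result no longer changes cyclically as the reference voltage moves through a step.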
If you cannot do that (it needs a K-V (Kelvin-Varley) divider and a lot of knob-fiddling time), you can instead (carefully) superimpose a (good!) AC signal on top of your reference voltage, to provide the bell-shaped noise process the formula requires, but then you have to compensate for the noise you added, etc.
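The dithering trick can also be sketched numerically. Here white Gaussian noise stands in for the "good AC signal", and the variance of the added noise is subtracted back out at the end (the dither amplitude and sample count are assumptions for the sketch, and the subtraction assumes the dither is independent of the meter):

```python
import math
import random
import statistics

def meter_reading(v_in, resolution=0.1):
    # Ideal 100 mV meter, rounding half-up to the nearest step.
    return math.floor(v_in / resolution + 0.5) * resolution

random.seed(42)
v_ref = 5.0          # the DC reference under test
dither_sd = 0.1      # added noise RMS, about one step (assumed value)

# With dither, the readings spread over several codes, bell-shaped.
readings = [meter_reading(v_ref + random.gauss(0.0, dither_sd))
            for _ in range(100_000)]

var_total = statistics.pvariance(readings)
# Compensate for the noise we deliberately added: independent variances add,
# so we subtract the known dither variance to get what the meter contributes.
var_meter = var_total - dither_sd ** 2
print(var_meter)     # close to resolution**2 / 12, the quantization variance
```

With a real instrument the "etc." is the hard part: the dither source's amplitude and distribution have to be known well enough that subtracting it out does not dominate the error budget.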
It sounds absolutely bonkers the first time you hear it, but it is *so* much easier to find the stddev for a 3458A than for a handheld 3½-digit meter...