With a working AZ mode, the noise curve should continue to go down at long times, even beyond 300 seconds. There is no need to choose an integration time near the minimum of the curve - one can always average the data later. Most modern DMMs get their longer (e.g. > 100 PLC) integration times from averaging anyway.
The use of faster sampling is more a question of data rate and memory / file size - not a big problem anymore. So even for long-term logging one can use quite fast sampling and do filtering / averaging as needed later. Also, there is often no choice of time scale - it is set by the experiment.
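The "sample fast, average later" point can be sketched in a few lines - a minimal example (Python/numpy, on made-up white-noise data, not any real DMM log) of collapsing fast 1 Sa/s readings into a longer effective integration time after the fact:

```python
import numpy as np

def block_average(samples, n):
    """Average non-overlapping blocks of n fast samples into one
    slow sample, discarding any incomplete block at the end."""
    m = len(samples) // n              # number of complete blocks
    return samples[:m * n].reshape(m, n).mean(axis=1)

# Example: readings logged at 1 Sa/s, averaged down to an
# effective 10 s integration time after the fact.
rng = np.random.default_rng(0)
fast = rng.normal(0.0, 1e-6, size=1000)   # 1 uV RMS white noise
slow = block_average(fast, 10)            # 100 samples, lower RMS
```

For white noise the averaged trace should show roughly sqrt(10) less RMS noise, which is exactly what a 10x longer NPLC setting would have bought.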
Also keep in mind that with real, non-zero readings there will be additional noise from the reference. This is not captured in the thread so far, but it can be quite important at slower time scales. Most references show quite some flicker noise, so the curve including the reference will go up after some time. For comparison one could include the noise of typical references (e.g. LM399 and LTZ1000). Looking at the zero-point noise in the 10 V range is a slightly odd test case, as for a real signal near zero one would often use a smaller range.
Based on your info, I am starting to understand more about integration time and noise. I went to find the longest log I have, and TADA! Well, at least I now know that is not how to use "point (c)".
But I did find an alternative curve which may fit a point (c) in a different way.
By changing the confidence level from 1 sigma to 3 sigma and using the flicker FM boundary, the range where noise is included or excluded can be seen, and there is a low position where a "point (c)" could be found on top of the upper-bound tips. Longer integration captures more and more LF noise, which is why the boundary expands at the tail (especially for a noisy DMM).
Plot 2015Nov02_2357_21DD.gif: 0.1 V, 1 NPLC (sigma = 3), 30k samples
Plot 2015Nov01.gif: 10 V, 10 NPLC (sigma = 3), 20k samples
Plot 2015Nov01b.gif: 10 V, 10 NPLC (sigma = 2), 20k samples
For simplicity, assume all samples are taken at 1 sample per second.
These plots are, I think, from well before any mods, so the DMM is quite noisy. In the third plot, using sigma = 2, it seems that with an even longer integration beyond 1000 seconds the 100 nV resolution can become "statistically" useful. If I use sigma = 3, only between 1000 and 2000 s of integration does 100 nV barely become of some use (that is a total effective integration of over 10,000 NPLC - I guess this is why this DMM was sent to the dumpster?). On the 100 mV range, it only needs 32 seconds to make 100 nV useful (sigma = 3, a total effective integration of only 32 NPLC). So if I were using a noisy DMM like this on 100 mV for long-term logging, I assume it would be best to integrate to 32 seconds to obtain usable (and repeatable) data down to 100 nV while excluding the most noise.
Would my use of Allan variance in this way be applicable? It makes sense, I would think?
So my guess is that for a 3458A to resolve 10 nV, a suitable Allan plot can be taken to find the optimal integration time too?
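For anyone who wants to try this on their own logs, here is a minimal sketch (Python/numpy, run on simulated white noise rather than a real 3458A log) of an overlapping Allan deviation and picking the tau with the lowest value:

```python
import numpy as np

def adev(y, taus, t0=1.0):
    """Overlapping Allan deviation of equally spaced readings y
    (sample interval t0 seconds) at averaging times tau = m * t0."""
    c = np.cumsum(np.insert(np.asarray(y, dtype=float), 0, 0.0))
    out = []
    for tau in taus:
        m = int(round(tau / t0))
        avg = (c[m:] - c[:-m]) / m      # overlapping m-sample means
        d = avg[m:] - avg[:-m]          # means one tau apart
        out.append(np.sqrt(0.5 * np.mean(d * d)))
    return np.array(out)

# 20k simulated readings at 1 Sa/s with 1 uV RMS white noise:
rng = np.random.default_rng(1)
y = rng.normal(0.0, 1e-6, size=20000)
taus = [1, 2, 4, 8, 16, 32, 64]
sigmas = adev(y, taus)
best = taus[int(np.argmin(sigmas))]     # candidate "point (c)"
# for pure white noise the curve keeps falling (~1/sqrt(tau)), so
# the minimum lands at the longest tau; a real DMM log bends back
# up once flicker noise or drift takes over, giving a true minimum.
```

On real data the tau at the minimum of `sigmas` is the integration time beyond which further averaging stops helping.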
It would also be interesting to turn on all the noisy appliances in the house and compare an Allan plot against one taken with nothing on.
This link describes a different Allan variance program, but it has more details about the different noise types and the integration slope leading to the lowest noise: http://www.stable32.com/paper2ht.htm - this is what gave me the idea of the lowest-point thingy.
**Edit: but looking at Tin's 7 V log "7v_3458_nplc200_tin_goodA3.csv", we see that the noise curve now has an increasing slope. Using Allan variance, we can see that in order to resolve 1 uV, we should integrate for less than 16 seconds. Does this make metrological sense?
The same goes for the 10k resistor log "time_10k_dmm_3458_nplc100_tin.csv": to resolve 0.01 ohm we need to integrate within 64 s, and in the resistor's case there is a point (c) at 2-4 s.
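The "stay below 64 s" reading can also be pulled out programmatically. A rough sketch (on simulated drifting data, not the actual CSV - the 2 uV noise level and 10 nV/s drift rate are made up for illustration) that keeps only the averaging times whose two-sample deviation stays below the target resolution:

```python
import numpy as np

def adev_point(y, m):
    """Non-overlapping two-sample (Allan) deviation of equally
    spaced readings y at averaging factor m."""
    k = len(y) // m                        # complete m-sample blocks
    avg = y[:k * m].reshape(k, m).mean(axis=1)
    d = np.diff(avg)                       # consecutive block means
    return np.sqrt(0.5 * np.mean(d * d))

# Readings with 2 uV RMS white noise plus a slow linear drift
# (10 nV/s, hypothetical), logged at 1 Sa/s:
rng = np.random.default_rng(2)
n = 20000
y = rng.normal(0.0, 2e-6, size=n) + 1e-8 * np.arange(n)
target = 1e-6                              # want to resolve 1 uV
ok = [m for m in (1, 8, 64, 256) if adev_point(y, m) < target]
# longest integration time (in s) still below the target, before
# the drift term takes over and pushes the deviation back up:
tau_max = max(ok) if ok else None
```

Too-short averaging fails because white noise dominates; too-long fails because drift dominates - only the middle taus clear the 1 uV bar, matching the shape described above.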