Here is a better example, being sure to use both the 10 V scale (auto-ranged for the sense measurement) and the 1 V scale set manually (always HiZ when using the Fluke 732B 1.018 V resistor-divider output).
The premise is that the hp / Agilent DMMs perform the DCV ratio function by making independent measurements at the Vsense terminals and the Vin terminals, as opposed to a "true" ratiometric measurement (e.g. using one input to stand in temporarily for the internal reference).
All measurements were made with an Agilent 34461A 6.5 digit DMM using a 10 power line cycle (plc) integration time with statistics averaging; averaging was used to squeeze out the full resolution. It is understood that thermal EMFs likely introduced errors, so these values are off "absolute" in the 6th to 7th digit (unimportant for this experiment / demonstration of DCV ratio).
A) Measure the FLUKE 732B 10 V and 1.018 V outputs using the DCV function with statistics averaging
100 samples, 10 plc; 10 V output on the 10 MΩ input setting, 1.018 V output on HiZ
1.018 V: 1.018 165 0 V
10 V: 9.999 987 V
Ratio: (30 samples) R101.816 93 m (ratio, no units)
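If the premise is right, the displayed ratio should simply be the two independent DCV readings divided. A quick check with my own arithmetic (not instrument internals), using the step A) readings:

```python
# Sketch: recompute the DCV ratio from the two independent readings above.
v_in = 1.0181650      # 1.018 V output, 1 V scale, HiZ
v_sense = 9.999987    # 10 V output, 10 V scale

ratio = v_in / v_sense
print(f"{ratio * 1e3:.5f} m")   # computed: 101.81663 m, vs 101.81693 m measured
```

The ~3 ppm difference from the measured ratio is consistent with the separate 30-sample ratio run and the thermal EMF caveat above.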
B) Intentionally change the 34461A 1 V scale calibration (gain) by about +30 uV (30 ppm for a 1 V full scale)
EDC 520A set to 1 V, read on the 34461A: 1.000 004 5 V
EDC 520A set to a +30 uV error signal: 1.000 035 7 V (about +30 uV higher)
Calibrate the 34461A DCV 1 V scale to 1.000 035 7 V by telling the cal procedure that 1.000 035 7 V is 1.000 000 0 V
EDC 520A still at the +30 uV error signal (1.000 035 7 V): the 34461A now reads 0.999 999 8 V
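The size of the deliberately introduced gain error can be checked from the readings quoted above (my arithmetic, assuming a pure gain term from the mis-cal):

```python
# Gain error introduced in step B), from the EDC 520A readings above.
nominal = 1.0000045    # 34461A reading with the EDC set to 1 V
shifted = 1.0000357    # reading with the +30 uV error signal applied

error_v = shifted - nominal
print(f"applied error: {error_v * 1e6:.1f} uV")   # ~31.2 uV, i.e. ~31 ppm of 1 V

# Telling the cal routine that 1.0000357 V is 1.0000000 V scales all
# subsequent 1 V scale readings by this factor:
k = 1.0000000 / 1.0000357
print(f"{shifted * k:.7f}")   # the error signal now reads back 1.0000000 V
```

The actual post-cal reading of 0.999 999 8 V is within 0.2 uV of this, i.e. within the noise floor.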
C) Repeat step A) with the cal error of step B) in place
1.018 V: 1.018 129 5 V (compare with before: 1.018 165 0 V)
10 V: 9.999 985 V
Ratio: (30 samples) R101.813 3 m (ratio, no units) (compare with before: R101.816 93 m)
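If the ratio really is built from independent Vin and Vsense readings, the 1 V scale mis-cal should pull both the 1.018 V reading and the ratio down by the same ~36 ppm while leaving the 10 V reading alone. A prediction from my own arithmetic, using the step A) values:

```python
# Predict the step C) results from the step A) readings plus the
# deliberate 1 V scale gain error.
k = 1.0000000 / 1.0000357          # reading scale factor after the mis-cal

v1018_predicted = 1.0181650 * k
print(f"{v1018_predicted:.7f}")    # predicted ~1.018 128 7 V, vs 1.018 129 5 V measured

ratio_predicted = 0.10181693 * k   # scale the step A) ratio the same way
print(f"{ratio_predicted * 1e3:.4f} m")  # predicted ~101.8133 m, matching the measured 101.8133 m
```

The agreement at the sub-ppm level is what makes the conclusion below hard to escape.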
Conclusion: the DCV ratio function depends on Vin, Vsense, the linearity of the ADC, and the calibration (gain and offset) of both the 1 V and 10 V DCV scales when comparing a 10 V or 7 V reference signal (Vsense) to a 1 V source (Vin) under calibration. My understanding is that the 34401A DC ratio function works the same way. The later, more accurate 34410A (the "10A") dropped the DCV ratio function.
Probably, short of a short-term-calibrated hp 3458A, one of the best ways to approach an absolute 1 V calibration, based on a 10 V value "known" to be "correct", is still the Fluke 752A Hamon method. Hamon experiments can be done in a small lab, with the understanding that results are only short-term valid (if at all) to some stated precision. Here is a reference:
http://www.gellerlabs.com/752AJunior.htm
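The attraction of the Hamon approach is that the series-to-parallel ratio of n nominally equal resistors is n squared with only second-order sensitivity to the individual resistor errors. A toy illustration of that principle (not the 752A's exact network, and the ±100 ppm spread is an arbitrary assumption):

```python
import random

# Toy Hamon illustration: n nominally equal resistors with small random
# errors give a series/parallel ratio of n^2 with only second-order error.
random.seed(1)
n = 10
resistors = [10_000.0 * (1 + random.uniform(-100e-6, 100e-6)) for _ in range(n)]

r_series = sum(resistors)
r_parallel = 1.0 / sum(1.0 / r for r in resistors)

ratio = r_series / r_parallel
err_ppm = (ratio / n**2 - 1) * 1e6
print(f"ratio = {ratio:.6f} (ideal {n**2}), error = {err_ppm:.4f} ppm")
# individual errors are up to 100 ppm, but the ratio error is of order
# (100 ppm)^2, i.e. 0.01 ppm or below
```

This second-order cancellation is why a small lab can get a usable 10:1 or 100:1 transfer without owning individually calibrated resistors.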
For 5.000 00 V, it would also seem valid to use two adjustable 5 V reference sources and null their series connection (yes, you need to worry about the sum of the thermal EMF junctions in the connections) against a known 10 V calibration source. One could then reverse the connections, check for zero volts, and iterate until the sum matches 10 V and the two 5.000 V sources match each other (I suppose that last step could be a direct 5 V to 5 V null check too). The idea is that both references need to be the "same value", and a "same value" whose sum "exactly" matches the 10 V reference.
Then one of the 5.000 00 V references might be suitable for short-term checking of the DMM's 10 V scale at 5 V, or for setting a third reference, by null techniques, to one of the 5.000 00 V pair. As with any calibration exercise, to be rigorous one would need to try to identify the possible error sources and set the resulting uncertainty range accordingly.
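The two-null bootstrap above reduces to solving two simple equations. A sketch in my own formulation, with hypothetical null readings standing in for real measurements:

```python
# Sketch of the two-source bootstrap: null the series sum against a known
# 10 V reference, null the two 5 V sources against each other, then solve.
v10 = 10.000002         # assumed "known" 10 V reference value (hypothetical)
n_sum = +3.5e-6         # null reading: (V1 + V2) - V10      (hypothetical)
n_diff = -1.2e-6        # null reading: V1 - V2              (hypothetical)

# Two equations: V1 + V2 = V10 + n_sum  and  V1 - V2 = n_diff
v1 = (v10 + n_sum + n_diff) / 2
v2 = (v10 + n_sum - n_diff) / 2
print(f"V1 = {v1:.7f} V, V2 = {v2:.7f} V")
# In practice one trims both sources until n_sum and n_diff null to the uV
# level, remembering that thermal EMFs in the series loop add directly to n_sum.
```

The reversal check in the procedure above is what exposes a constant thermal EMF term: it adds to one null and subtracts from the other.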