I decided to do the print format edit first. Now everything is lined up and prints values in volts and amps, so no more mucking about with annoying scientific notation. I also reduced the settling time to 2 seconds for voltage and 3 seconds for current, in anticipation of the larger cal point arrays, which will make the calibration take longer.
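For anyone curious what the formatting change looks like, here's a minimal sketch. The function and constant names (`fmt_reading`, `SETTLE_V`, `SETTLE_I`) are my own placeholders, not necessarily what the script uses; the point is just fixed-point formatting instead of Python's default scientific notation for small readings.

```python
# Settling delays in seconds (placeholder names, values as described above)
SETTLE_V = 2  # wait after setting a voltage cal point
SETTLE_I = 3  # wait after setting a current cal point

def fmt_reading(value, unit):
    # Fixed-point with 4 decimal places, e.g. -0.2345v instead of -2.345e-01
    return f"{value:.4f}{unit}"

print(fmt_reading(-0.2345, "v"))   # -0.2345v
print(fmt_reading(2.5e-3, "A"))    # 0.0025A
```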
Let me know if you have any problems with new changes and thanks for using the script!
UPDATE: There was a minor typo in TelnetTest that I fixed. It would cause the DMM to report an error because of an unfinished measurement. I had accidentally called readDMM() instead of dmm.read() inside the for loop. Whoops.
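To illustrate the bug, here's a rough sketch of the loop in question. The `DMM` class and `run_cal` function are stand-ins I made up for illustration, not the actual TelnetTest code; the real `dmm.read()` would trigger and fetch a measurement over Telnet.

```python
class DMM:
    """Stand-in for the Telnet-connected DMM driver (hypothetical)."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def read(self):
        # Trigger and fetch one complete measurement
        return next(self._readings)

def run_cal(dmm, cal_points):
    results = []
    for point in cal_points:
        # Bug was roughly: results.append(readDMM())
        # That called the wrong function, leaving a measurement unfinished
        # on the instrument, so the DMM reported an error.
        results.append(dmm.read())  # fix: read through the instrument object
    return results
```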
Now it looks very good, thank you:
ch1 DAC-V calibration
step 0, cal point: 0.0v, meas val: -0.2345v
step 1, cal point: 0.01v, meas val: -0.2349v
step 2, cal point: 0.03v, meas val: -0.2352v
step 3, cal point: 0.1v, meas val: -0.2554v
step 4, cal point: 0.2v, meas val: -0.1541v
step 5, cal point: 0.7v, meas val: 0.3462v
step 6, cal point: 1.0v, meas val: 0.6448v
step 7, cal point: 1.2v, meas val: 0.8427v
step 8, cal point: 1.7v, meas val: 1.3375v
About extended calibration points: I've done it (using your script) but I don't see any difference. My problem on CH (0-40mV) is still present and can only be solved by manual calibration, and the other channels are still about the same as before.
1mV output > 1.91mV
2mV output > 2.94mV
3mV output > 3.99mV
I have gotten better results before. It looks like the DP832A cannot benefit from more granular calibration points. Or maybe the values must follow a specific algorithm?
I have attached the script used and its output.
Maybe you or someone else can repeat the test. Maybe I was doing something wrong.