Interestingly, the line on the graticule seems to be more accurate than the displayed number (Mean).
This is the formula that derives the "Mean" value from the raw FPGA data:
Mean = (ChanGain * ((((ChanVolDiv / 25) * DataSum) / DataCount) - (ChanVolDiv * ChanPosZeroVolt / 25))) / 970
ChanGain defaults to 1000 and can be changed in steps of 2 by entering "Calib:AC gain" mode
ChanVolDiv is the vertical sensitivity (apparently in µV: 50000 = 50 mV/div in the example below)
DataSum is the sum of all signed 8-bit sample values from the FPGA
DataCount is the number of FPGA samples
ChanPosZeroVolt is the zero-voltage position (adjusted by moving the waveform up/down)
25, 2000 and 970 are fixed constants
Example (the result only matches if integer division is used at each step, so that is presumably what the firmware does):
(1000 * ((((50000 / 25) * 6583) / 1200) - (50000 * -56 / 25))) / 970 = 126774 µV
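Here's a minimal Python sketch of that formula, assuming integer division at every step (plain floating-point arithmetic would give roughly 126776 µV rather than the 126774 µV above). The function name and argument order are mine, not the firmware's:

```python
def mean_uV(chan_gain, chan_vol_div, data_sum, data_count, chan_pos_zero_volt):
    """Reconstruction of the scope's Mean readout, in µV (integer math assumed)."""
    # µV per ADC count at this sensitivity, times the sum of all samples
    scaled_sum = (chan_vol_div // 25) * data_sum
    # average of the samples, scaled to µV
    average = scaled_sum // data_count
    # subtract the zero-position offset, apply channel gain, divide by the fixed 970
    zero_offset = chan_vol_div * chan_pos_zero_volt // 25
    return chan_gain * (average - zero_offset) // 970

print(mean_uV(1000, 50000, 6583, 1200, -56))  # → 126774, matching the example
```

One caveat: Python's `//` floors toward negative infinity while C (and most likely the firmware) truncates toward zero; the two agree for the example values, but could differ by one count when intermediate results are negative and not exact.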