As Wytnucls said, the meter uses an internal reference to COMPARE against the voltage under measurement. In an old-school d'Arsonval movement, the "reference" was a tiny internal hair-spring: the needle settles where the coil's magnetic force balances the spring's tension. Modern solid-state digital meters instead use a fairly crude comparison process, commonly called "successive approximation."
From the reference voltage, they use a chain of resistors to create all the intermediate voltages: 0.1 V, 0.2 V, 0.3 V, and so on.
Then they ask, over and over: is the measured voltage 0.1? Is it 0.2? Is it 0.3? Is it 0.4? And so on. When a match is found, that value is displayed on the readout. NOTE: this is an extremely truncated and over-simplified explanation, but I think it's what stitch is asking about. A toy version of that loop is sketched below.
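Here is a minimal sketch of that step-and-compare idea. The 0.1 V step, the 2.0 V range, and the ideal comparator are all assumptions for illustration; real meters use much finer steps and smarter search strategies.

```python
# Toy model of the "step and compare" idea described above.
# Assumed values (not from any real meter): a 0.1 V step derived
# from the reference, an ideal comparator, and a 0-2.0 V range.

STEP_V = 0.1   # spacing of the resistor-chain taps
MAX_V = 2.0    # top of the measurement range

def measure(unknown_v: float) -> float:
    """Walk up the ladder taps until one meets or exceeds the input."""
    tap = 0.0
    while tap < MAX_V:
        tap += STEP_V
        if unknown_v <= tap:       # comparator reports a "match" here
            return round(tap, 1)   # this is the value shown on the readout
    return MAX_V                   # over-range

print(measure(0.37))   # -> 0.4, the first tap at or above the input
```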
As Jeremy explained, there is a different kind of analog-to-digital converter (a "flash converter") that compares the input voltage SIMULTANEOUSLY against every reference level, from 0.0 V all the way up to 2.56 V. That method is much faster, but it draws more power and has lower resolution. In some applications (very fast signals like video, etc.) that trade-off is worth the extra power and the lower resolution, but it is typically not suitable for a proper test meter.
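For contrast, here is a toy model of the flash approach: every comparison happens at once instead of one at a time. The comparator count of 16 is an assumption chosen to keep the example short (real flash ADCs use 2^N - 1 comparators for N bits); the 2.56 V full scale is the figure mentioned above.

```python
# Toy model of a flash converter: all comparators fire at once, and
# the number of thresholds the input exceeds gives the digital code.
# Assumes 16 comparators spread over a 2.56 V full scale.

FULL_SCALE_V = 2.56
N_COMPARATORS = 16
thresholds = [FULL_SCALE_V * (i + 1) / N_COMPARATORS
              for i in range(N_COMPARATORS)]

def flash_convert(unknown_v: float) -> int:
    # In hardware these comparisons are simultaneous; here we just
    # count the thresholds the input exceeds (a "thermometer code"
    # that a priority encoder would turn into a binary value).
    return sum(1 for t in thresholds if unknown_v >= t)

code = flash_convert(1.0)
print(code, "->", code * FULL_SCALE_V / N_COMPARATORS, "V")  # 6 -> 0.96 V
```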
That is how modern digital multimeters measure VOLTAGE. They measure CURRENT simply by using Ohm's Law: they measure the voltage drop across a known resistor (actually a very low-resistance "shunt"). And they measure RESISTANCE, again by Ohm's Law: they force a known current through the resistor under test and measure the voltage across it. So no matter what you are measuring (volts, ohms, amps, etc.), it all comes down to some voltage that is converted from analog to digital (A to D) and shown on the display.
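Since both of those modes reduce to Ohm's Law arithmetic, a short sketch shows the whole trick. The 0.1-ohm shunt and 1 mA test current are made-up illustrative values, not taken from any particular meter.

```python
# Ohm's Law arithmetic behind the current and resistance modes.
# The shunt value and test current are illustrative assumptions.

SHUNT_OHMS = 0.1        # low-resistance shunt in the current path
TEST_CURRENT_A = 0.001  # known current forced through the unknown resistor

def current_from_shunt(v_across_shunt: float) -> float:
    """Current mode: I = V / R across the known shunt."""
    return v_across_shunt / SHUNT_OHMS

def resistance_from_drop(v_across_unknown: float) -> float:
    """Resistance mode: R = V / I with the known test current."""
    return v_across_unknown / TEST_CURRENT_A

print(current_from_shunt(0.025))   # 0.025 V across 0.1 ohm -> 0.25 A
print(resistance_from_drop(4.7))   # 4.7 V at 1 mA -> 4700 ohms
```

Either way, the only thing the ADC ever actually digitizes is a voltage.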