You do have the occasional comment about certain models, and reading further indicates that their popularity is due to their exceeding their specifications by a good margin.
True, but there are two important points that keep getting lost that I will continue to hammer on:
1. When tested at the specified calibration temperature range, typically 23C +/- 1C, a meter absolutely needs to exceed its specifications by a significant margin to ensure that it will stay in spec over the full specified temperature range, with the stated confidence interval, for the specified calibration interval. So if a meter has a "1-year" spec that adds up to, say, 15 counts at some specified calibration point with a 99% confidence interval, statistically the typical meter will be off by fewer than 5 counts--assuming a normal distribution and yada yada. There can be other factors involved in a stated tolerance spec as well, but that would be a whole book.
2. You cannot assume that just because the meter is very close--or even exact--at the calibration points and at the calibration temperature, it is somehow guaranteed to exceed its published specification at some other voltage or some other temperature. You would have to do a very extensive characterization of your own to make that claim. In cases where that is done, it is typically done at a single test point for a particular purpose (DMM), or the instrument has only one function (10V reference, standard resistor, etc.).
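To put rough numbers on point 1: here is a sketch of how a two-sided 99% confidence spec translates into a "typical" deviation, assuming a zero-centered normal distribution. The 15-count figure is just the example spec from above, not any particular meter's datasheet.

```python
from statistics import NormalDist

spec_counts = 15   # example published "1-year" spec at the calibration point
confidence = 0.99  # stated two-sided confidence interval

# If 99% of meters fall within +/-15 counts, the spec corresponds to
# +/- z * sigma, where z is the 99.5th-percentile point of the
# standard normal (two-sided 99% interval).
z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~2.576
sigma = spec_counts / z                          # ~5.8 counts

# The median absolute deviation of a zero-mean normal is ~0.674 * sigma,
# so the "typical" (median) meter is off by only about 4 counts.
median_dev = sigma * NormalDist().inv_cdf(0.75)  # ~3.9 counts

# Fraction of meters expected within 5 counts of nominal:
frac_within_5 = 2 * NormalDist(0, sigma).cdf(5) - 1  # ~0.61

print(f"sigma = {sigma:.1f} counts, median deviation = {median_dev:.1f} counts")
```

So under these (idealized) assumptions, a bit over half the population lands within 5 counts, and the median unit is off by about 4 counts--which is the statistical margin the manufacturer needs so the whole population stays inside 15 counts at 99% for the full year and temperature range.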
A calibration certificate for one or the other of those meters indicating that it was exactly correct at all of its test points (the best possible result) actually doesn't prove much, especially in light of the OP's stated purpose.