It's not really the bug that is the concern, it's the software testing coming across as weak. Autoranging is not rocket science. You flowchart and test at the boundary conditions, such as a range change, and a range change gone wrong. And this issue only seems to be a problem for people working in the 600 ohm world.
I think you are underestimating the complexity of that simple statement. I tested my meter, and I found the following:
- My meter ranges up when the 66000 count threshold is reached
- My meter ranges down when the 61000 count threshold is reached
- My meter shows 6xxxx when a resistor is applied above the 63100 count reading
- My meter shows 06xxx when a resistor is applied below the 63100 count reading
None of these values correlates with the other users' "magic values" that cause this issue, which range from 65000 to 67000 counts. So which "boundary condition" should they have been testing to?
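To show why that question is harder than it sounds, here is a minimal sketch (in Python, with the threshold constants taken from my own meter's behaviour; the gain-error model is my assumption, not anything known about the actual firmware) of hysteresis-based autoranging. Because the firmware compares *measured* counts against fixed thresholds, a small per-unit ADC or reference tolerance shifts the real-world resistance at which the range change actually fires:

```python
RANGE_UP_COUNTS = 66000    # observed range-up threshold on my meter
RANGE_DOWN_COUNTS = 61000  # observed range-down threshold on my meter

def autorange(counts, current_range, gain_error=1.0):
    """Return the new range index for a raw reading.

    gain_error is a hypothetical per-unit tolerance factor: a 1% gain
    error moves the input level at which the thresholds are crossed,
    so two "identical" meters switch ranges at different readings.
    """
    measured = counts * gain_error
    if measured >= RANGE_UP_COUNTS:
        return current_range + 1          # range up
    if measured <= RANGE_DOWN_COUNTS:
        return max(current_range - 1, 0)  # range down
    return current_range                  # inside hysteresis band: stay put

# A reading of 65500 counts stays put on a nominal unit...
print(autorange(65500, 2))        # 2
# ...but ranges up on a unit with +1% gain error (65500 * 1.01 = 66155).
print(autorange(65500, 2, 1.01))  # 3
```

The point of the sketch is that the boundary a test engineer writes a test case against is the nominal one; the boundary a user's meter actually exhibits is the nominal one times that unit's tolerance stack-up.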
I suspect this bug comes from some perfect combination of hardware tolerance and firmware logic. It is quite possible they did test the expected boundaries, but the hardware in the test meters never hit the magic values that trigger the issue. It is also possible that the actual boundary differs slightly from the designed one due to some tolerance-related variance.
Yes, a company should test the boundaries. The problem is that the test engineers very likely performed extensive testing but missed the one magic permutation that let this unfortunate bug slip through.
There is no complex product in existence that is without bugs. The important part is whether the device has a bug that significantly affects its fitness for use and end-user satisfaction. Equally important (as I have said before) is how the company responds and what it does when a bug is discovered.