I do see your point. I use percent of reading because the reading is typically the end result, and likely displayed somewhere for use; whereas FS may not be remembered (if different dividers/multipliers are deployed via a selection switch). So while %FS is more technically definable, %reading is more "user-friendly".
The thing is, the error rising to infinity is purely an artefact of your definition of error; it'll happen with any ADC. In fact, the same phenomenon will occur with all sorts of things people trust, like a metal ruler. Use a ruler to measure 1.3mm and it's awfully difficult to get "10% accuracy". Use the same ruler to measure 130mm and it's incredibly easy to get "10% accuracy". This is a good principle to be aware of in general, of course, but presenting it as if it's a special peculiarity of AVR ADCs is surely very confusing for people. I think most people, myself included, interpreted your message as "beware of microcontroller ADCs, they have this weird behaviour", as if it were somehow unique to the AVR or to non-dedicated chips.
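To make the ruler analogy concrete, here is a minimal sketch (the 10-bit resolution, reference voltage, and ±2 LSB figure are assumed for illustration, not taken from any datasheet): a fixed absolute error is constant in volts, but expressed as % of reading it blows up as the reading shrinks.

```python
# Sketch with assumed numbers: a 10-bit ADC, Vref = 5.0 V, and a fixed
# +/-2 LSB absolute error. The absolute error never changes, but as a
# percentage of the measured value it diverges near zero -- the same
# effect as measuring 1.3 mm vs 130 mm with the same metal ruler.

VREF = 5.0                    # volts, assumed reference
BITS = 10                     # assumed resolution
LSB = VREF / (1 << BITS)      # one code step in volts (~4.88 mV)
ABS_ERR = 2 * LSB             # assumed fixed +/-2 LSB error (~9.77 mV)

def pct_of_reading(v):
    """Fixed absolute error expressed as % of the measured value."""
    return 100.0 * ABS_ERR / v

for v in (5.0, 0.5, 0.05, 0.01):
    print(f"{v:6.2f} V -> +/-{pct_of_reading(v):7.2f} % of reading")
```

The point: nothing about the ADC got worse at 0.01 V; only the yardstick we judged it by changed.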
I'm struggling to think of an instrument I've encountered in my life that displays an error as +/- x % of measured value. I totally take your point that % of FS should be left on the ADC datasheet and not presented to consumers or end-users, because it could be misunderstood; I agree with that, and I haven't seen any instruments do that either. But the solution is to present error in millivolts, milliamps, millimeters or widgets, not % of anything.
We are in total agreement on the facts, but in disagreement on presentation. We come from very different frames of reference.
First, I did not say or imply (nor mean to) "beware of MCUs" when it comes to measurements. I was saying: "hey, there is this habitable zone and there is this wild country; stay in the sweet zone and you are fine, but if you assume the whole thing is flat and everywhere (Vmin to Vmax) is the same, you are headed for a problem."
Second, I fully understand your "struggling to think of an instrument I've encountered in my life that displays an error as +/- x % of measured value." It makes perfect sense technologically, but it is not user-friendly except to EE kinds: FS%error + RD%error + CountError.
My experience is mostly outside the EE world, and few if any people there think of error as % of FS value. My experience has been mostly:
Read a number and add a tolerance (Value+-%error).
This is one part that is very confusing to people not from an EE background. The way DMMs define accuracy was driving me nuts.
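For concreteness, the kind of DMM spec that causes the confusion reads something like "±(0.5 % of reading + 2 counts)". A minimal sketch of how that unpacks into an actual error band (all spec numbers here are made up for illustration):

```python
# Sketch with assumed spec numbers: a 3.5-digit DMM on its 2.000 V range,
# accuracy quoted as +/-(0.5 % of reading + 2 counts).
PCT_OF_READING = 0.5     # percent of reading, assumed
COUNTS = 2               # extra counts of error, assumed
RESOLUTION = 0.001       # volts per count on the assumed 2 V range

def worst_case_error(reading):
    """Worst-case absolute error per the +/-(%rd + counts) spec style."""
    return reading * PCT_OF_READING / 100.0 + COUNTS * RESOLUTION

reading = 1.500
err = worst_case_error(reading)
lo, hi = reading - err, reading + err
print(f"{reading:.3f} V -> true value between {lo:.4f} V and {hi:.4f} V")
```

So a 1.500 V reading carries ±9.5 mV of uncertainty under this assumed spec, which is exactly the sort of two-term arithmetic a non-EE user never expects to have to do.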
The car engine guy may think it makes perfect sense to measure speed in terms of % of max engine power, but someone walking into the shop would be totally confused.
It was precisely because of that difference in frame of reference that I translated the data into something I encounter more in my world:
LowLimit < Value +- X%error < HiLimit
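Translating a %FS datasheet figure into that lo/hi form might look like the following sketch (the ±0.2 %FS spec and 5 V full scale are assumed for illustration):

```python
# Sketch: convert an assumed +/-0.2 % FS spec into lo/hi limits around
# a reading, plus the equivalent % of reading at that point.
FS = 5.0                        # full scale in volts, assumed
FS_PCT = 0.2                    # assumed % of full scale
ABS_ERR = FS * FS_PCT / 100.0   # constant absolute error: 10 mV

def limits(reading):
    """Return (lo, hi) bounds and the equivalent % of reading."""
    return reading - ABS_ERR, reading + ABS_ERR, 100.0 * ABS_ERR / reading

for v in (4.0, 0.1):
    lo, hi, pct = limits(v)
    print(f"{v:4.1f} V -> {lo:.3f} .. {hi:.3f} V  (+/-{pct:.1f} % of reading)")
```

The lo/hi pair is the same width everywhere; only the "% of reading" column changes, which is the whole point of the thread.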
This is an EE forum, and of course everyone here should use a common EE frame of reference. But not everyone is there yet. The way the OP framed his original question, it looks like a post from someone like me, not already an EE expert, thus... perhaps "more common ways" may be easier to understand.