In my view the whole argument is ridiculous. If the supplier database is using pF, and the mfg is using pF, why does the person drawing the schematic feel the need to use nF for any reason other than to be difficult and/or elitist?
Maybe because the idea of having prefixes in the first place is to make the numbers more readable to humans? Also, a few US people in this thread have already stated that they prefer to use nF rather than to skip it, so it is not just us strange Europeans (and maybe people from other parts of the world) doing it.
I personally have no trouble understanding and using nF and freely converting it to and from pF or µF when needed, but I am just curious as to what led many Americans to avoid nF. There seems to be some sort of historical reason for it, but I do not think we have found the real answer in this thread yet.
It's just as easy to type a "p" or "u" as it is an "n", and mixing in nF only opens the door to one more layer of possible mistakes in the design and manufacturing process. Converting the whole industry over to nF, meanwhile, would be a project of herculean proportions.
Well, I think 10 nF is easier and less error-prone to interpret than 10000 pF. Heck, why not skip prefixes altogether if they are error-prone? How about 0.00000001 F and 0.000000001 F? Much easier than keeping track of what those pesky prefixes mean and how they relate to each other, right? ;-)
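As an aside, the prefix bookkeeping everyone does in their head is trivial to mechanize. A minimal Python sketch (the `format_cap` helper and its prefix table are my own illustration, not any standard library function):

```python
# SI prefixes relevant to capacitors, largest scale first.
PREFIXES = [(1e-3, "mF"), (1e-6, "uF"), (1e-9, "nF"), (1e-12, "pF")]

def format_cap(farads):
    """Format a capacitance in farads using the largest prefix
    that keeps the numeric part at 1 or above."""
    for scale, unit in PREFIXES:
        if farads >= scale:
            # %g trims floating-point noise and trailing zeros.
            return f"{farads / scale:g} {unit}"
    return f"{farads:g} F"

print(format_cap(1e-8))     # the 0.00000001 F from above: "10 nF"
print(format_cap(4.7e-12))  # "4.7 pF"
```

The same value prints as "10 nF" rather than "10000 pF" simply because nano is the largest prefix that still gives a mantissa of at least one, which is exactly the readability argument being made.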
No, I am not asking the US electronics industry to change its mind regarding this. I am just curious and trying to figure out whether the unreferenced Wikipedia statement is true (it seems to be to a large extent) and, if so, what the original reason for it is (so far I am almost as clueless about this as when I started this thread).
Per