Heh. Maybe it is just me.
C is NOT my primary language. So I see "d", and the first two thoughts that come to mind are "double-precision float" and "decimal", which in the contexts I'm more used to doesn't mean base 10: it means a number stored much like a string, with no limit on the number of digits or precision. I know I can use "i" instead, and that makes more sense to me as an integer. But it seems like everyone else uses "d", so I feel obliged to use it as well. Then I don't mess with printf for a while, go back to old code, see "d", and get confused all over again. I try to figure it out without the reference chart, and it goes something like this:
Hmm, "d" is an integer, right? But wait, "u" is an unsigned integer, so that would make a signed integer "i', and that makes "d"...double? Arg, gotta check that chart again...
Seriously, WHY did they assign two letters to the same darn thing? I don't care what anyone says, this is NOT intuitive.
I'm also trying to code for portability to other hardware. I'm currently in the middle of the second port, this time from a 16-bit to a 32-bit architecture. I had the foresight to write it from the start using types like int16_t where appropriate, which I typedef'ed to I16 for conciseness, so most of it has been super easy. But I'm now getting some stack issues with printf. While looking into it, I found something that says that in this scenario, I really should have been writing:
printf("I have %05" PRIi16 " bananas", ...)
Which is no longer as concise as [rs20]'s original example, but at least it's totally unambiguous. And it's different enough that I no longer feel obliged to use "d" when I really want to use "i". Yet if I want those bananas in hex, I have to do this:
printf("I have %05" PRIx16 " bananas", ...)
But what I'll end up doing is this:
printf("I have %05" PRIi16x " bananas", ...)
Remember how I typedef'ed int16_t to I16? That means I'm going to instinctively want to put that "i" in there, right before the "16", every time. Which makes perfect sense, really, because IT IS STILL AN I16. The fact that I'm asking for output in hex has NOTHING to do with the input data type, and should NOT replace or require changes to that convention.
So I can fix this with more defines (a rough sketch of the simple fixed-name version follows below). Give myself "PRIi16x", or more likely "priI16X", which fits in better with my existing I16 convention, plus whatever else I want to do that is comfortable and intuitive. In fact, by adding a preprocessing wrapper around printf, I could make it accept any of these:
printf("I have %05" priI16 priX " bananas", ...) // I16, uppercase hex
printf("I have %05" priX priI16 " bananas", ...) // I16, uppercase hex
printf("I have %05" priI16 prix " bananas", ...) // I16, lowercase hex
printf("I have %05" prix priI16 " bananas", ...) // I16, lowercase hex
Which is slightly less concise, but even more intuitive, because then I don't even have to remember where in a naming convention the "x" is supposed to go. I won't be confused about whether the "x" replaces the "I" or not, even if I've recently seen someone else using "PRIx16".
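For what it's worth, here's roughly how I'd expect the simpler fixed-name version of those defines to look. The pri* names are purely my own invention, just aliases layered on top of <inttypes.h>; the order-agnostic wrapper would take more work than this:

#include <inttypes.h>
#include <stdio.h>

/* My own convenience names, not standard: just aliases for the
   <inttypes.h> macros. The "I16" part always names the data type;
   the trailing letter only picks the output radix and case. */
#define priI16    PRIi16    /* I16, decimal       */
#define priI16x   PRIx16    /* I16, lowercase hex */
#define priI16X   PRIX16    /* I16, uppercase hex */

int main(void)
{
    int16_t bananas = 255;

    printf("I have %05" priI16  " bananas\n", bananas);            /* 00255 */
    printf("I have %05" priI16X " bananas\n", (uint16_t)bananas);  /* 000FF */

    return 0;
}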
And should I ever actually need a hex floating-point number (though I can't imagine why), I don't have to remember that it somehow becomes "A". What is that "A" supposed to stand for, anyway? Arcane? Who thought that was even useful? A signed hex integer is less weird, yet apparently impossible... Seriously, the more I look at all this, the more arbitrary it seems.
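(For anyone who's never actually seen one of those in the wild, this is all it does; the exact digits printed are up to the implementation, but on a typical system it comes out as something like 0X1.FFP+7:)

#include <stdio.h>

int main(void)
{
    /* "%A" = hexadecimal floating point, uppercase; 255.5 is 0x1.FF * 2^7 */
    printf("%A\n", 255.5);
    return 0;
}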
Ok, rant mode off.
My intent in posting this question was, before rolling my own defines and whatnot, to see whether anyone else had come up with something similar, but possibly more elegant.