Well I think that depends on your definition of the word "proper"
For sure. There wasn't a good emoticon to express it. I vacillated between "noooo" and "yes, but, butbut"
Basically, I too recommended int + size_t to Simon in a message or two earlier, for the very same reasons.
Yet, the simplistic char/short/int/long int (plus long long int, for compilers that support it) selection rule has failed me before, and has caused me pain in existing old projects. It works fine on any single architecture, except for one detail: if you choose a type smaller than the native register size, you get extra machine code, which may or may not matter.
However, when porting code between architectures with different type sizes (in particular, from ILP32 to LP64, i.e. from 32-bit ints, longs, and pointers to 32-bit ints but 64-bit longs and pointers), these types don't port that well. That's why the "new" integer types were added in C99, after all.
So, consider the "nooooo" as a quiet sob, not a yell.
The "new" integer types were designed to solve the exact questions Simon has posed here. When moving from 16-bit to 32-bit architectures, and again when moving from 32-bit to 64-bit architectures, code that used the simple size-based rules became inefficient. (The integer/pointer size disparities between e.g. ILP32 and LP64 did lead to bugs, and the intptr_t/uintptr_t types are designed to fix those, now and in the future; but here I'm talking about inefficiencies, as in the compiler generating unneeded extra code to follow the standard C integer type rules, because it does not know the programmer's intent. The size behaviour of these "new" integer types expresses programmer intent better, even if the hardware implementation is unchanged.)
In particular, with these "new" types, there is no reason why int_fast16_t, int16_t, and int_least16_t should all (or any) be the same type: the first is the one suitable for temporary variables and function parameters (i.e., it is of native integer arithmetic size), the second is exactly 16 bits, and the third is at least 16 bits, but the architecture can use a larger type if keeping it to exactly 16 bits would mean extra code (say, if accessing the upper 16 bits of a 32-bit word would require bit shifting).
While there is nothing special in their hardware implementation, the interesting part is the rules for how their sizes are defined on different architectures, and how useful they can be for portable code. But few C programmers write truly portable code, so not many C programmers know this. (Which is kind of why I'm harping on about it here, even if this is just a thread where Simon is looking for quick hints on how to progress.)