Surprisingly, there is quite a lot of bare-metal C code that is portable between wildly different architectures. And I'm not talking only about the Linux kernel, either -- although it is probably the best-known example.
Any embedded device that has a product lifetime longer than, say, five years should really consider the portability aspect; it may make a significant difference to the BOM cost later on, and when the hardware does change, portability is what keeps the software development cost from ballooning. If you look at many embedded devices, like routers, you'll notice they can completely switch hardware architectures within the same product, between versions/revisions. You don't do that unless (most of) the same code can work on both.
Others who work on actual commercial embedded products could chime in, as I don't, but even an interface shim layer built from the custom types I mentioned, sitting on top of the existing vendor HALs, can make the actual product much easier to port between vendors (and their HALs).
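To make that concrete, here is a minimal sketch of what I mean by such a shim. All the names (shim_pin_t, shim_gpio_*, the vendor HAL header and call) are made up for illustration; a real shim would cover whatever peripherals the product actually uses:

```c
/* gpio_shim.h -- the only header the application code ever includes. */
#ifndef GPIO_SHIM_H
#define GPIO_SHIM_H

#include <stdint.h>
#include <stdbool.h>

typedef uint16_t shim_pin_t;   /* fixed-width, so the interface is identical on every target */

void shim_gpio_set_output(shim_pin_t pin);
void shim_gpio_write(shim_pin_t pin, bool level);
bool shim_gpio_read(shim_pin_t pin);

#endif
```

and then one small implementation file per vendor HAL:

```c
/* gpio_shim_vendor_x.c -- only this file changes when the vendor (or their HAL) changes. */
#include "gpio_shim.h"
#include "vendor_x_hal.h"   /* hypothetical vendor header */

void shim_gpio_write(shim_pin_t pin, bool level)
{
    /* hypothetical vendor HAL call; each vendor file maps the shim onto its own API */
    vendor_x_hal_gpio_write((uint32_t)pin, level ? 1u : 0u);
}
```

The application is then written entirely against gpio_shim.h, and porting to a different vendor means rewriting only these small wrapper files.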
In particular, for libraries and HALs, one must remember that it does not matter if you have one implementation or several, as long as the user-developer facing side is the same across all of them; then the user-developer does not even need to port their code between the architectures, as it should Just Work -- much like in the Arduino system. (Except that, because the Arduino folks did not consider the integer types, there is a lot of stuff that makes life hard for library writers and makes compiled Arduino code less than optimal, particularly when comparing 8-bit AVR and 32-bit ARMs.)
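The integer type issue boils down to 'int' being 16 bits on 8-bit AVR but 32 bits on 32-bit ARM, so the very same library source gets different ranges and different storage costs on the two. A small sketch of the kind of declarations that sidestep it, using only the standard <stdint.h> types (the struct itself is just an example):

```c
#include <stdint.h>

/* 'int' is 16 bits on 8-bit AVR but 32 bits on 32-bit ARM, so code that keeps
 * counters or indexes in 'int' has a different range and size on each target.
 * The <stdint.h> types keep the layout identical, and the "fast" variants let
 * the compiler pick an efficient width where only a minimum is required. */
typedef struct {
    uint8_t       buffer[32];   /* exact-width: same layout on AVR and ARM         */
    uint_fast8_t  head;         /* at least 8 bits, but whatever width is cheapest */
    uint_fast8_t  tail;         /*   on the target architecture                    */
} ring_buffer;
```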
A lot of what I have written here, Simon, is only something to consider, and perhaps let simmer; something you might recall when you encounter a related problem. In particular, do feel comfortable using just size_t, int, and unsigned int, because that's how most existing C code does it.
Besides, especially during the learning phase, it is most important to get stuff working, even if it is not that elegant/clean/optimal: writing the code is just a small part of software engineering, and you need somewhat working code to get experience with the rest, especially testing, maintenance (and porting, yes), documentation, and so on.
Also, I've found that if something turns out to be useful in the long term or in more than one environment, you end up rewriting it anyway, incorporating the features and details one has learned from experience (and dropping the unneeded ones). So it is not "ugly" or "bad" to write code that you know is far from optimal! (Security, on the other hand, must be designed into the software, and cannot be bolted on top afterwards.)
Indeed, one of the common programmer faults is premature optimization. Algorithmic and system-level optimizations always yield much better results than code-level optimizations, and my own experience says that one shouldn't bother with code-level optimizations at all before the first rewrite; the actual use and testing of the "crude"/"naïve" version always teaches me so much about the actual human-scale problem/task at hand that code optimization before that is usually just wasted time. There are exceptions, of course, but there is something in code-level optimization that tends to attract programmer minds, and being aware of that pull -- and of how little it matters in real life -- is kinda important.
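As a hypothetical illustration of the difference in scale (the function names are made up for this example): micro-tuning the loop in the first function below might win a few percent, while the algorithmic change in the second -- sort the data once, then binary-search it -- turns each lookup from O(n) into O(log n), which matters far more as the data grows.

```c
#include <stdlib.h>

/* Code-level tweaks to this loop (unrolling, pointer tricks) win a few percent at best. */
int contains_naive(const int *data, size_t n, int key)
{
    for (size_t i = 0; i < n; i++)
        if (data[i] == key)
            return 1;
    return 0;
}

static int cmp_int(const void *a, const void *b)
{
    const int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Algorithmic change: sort the array once, e.g. qsort(data, n, sizeof data[0], cmp_int),
 * after which every lookup is O(log n) instead of O(n). */
int contains_sorted(const int *data, size_t n, int key)
{
    return bsearch(&key, data, n, sizeof data[0], cmp_int) != NULL;
}
```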