There is no surprise that there are engineers quite content with C, or even committed to C.
When talking about low-level programming languages –– which is what I understand 'a hardware "oriented" programming language' to mean ––, C is simply the one with the longest proven track record, spanning decades. It isn't that C is great; it's just the benchmark the others get measured against, because of its widespread use and its role in systems programming and embedded development.
For examples of nearly bug-free programs written in C in the systems programming domain, go check out D. J. Bernstein's djbdns, qmail, daemontools, and cdb. This is the guy behind Curve25519, having released it in 2005.
Like it, dislike it, doesn't matter, C is just a tool. But as a tool, its features and track record are significant. So are its deficiencies, and that means any real effort to do better is valuable.
In comparison, C# is a managed language. .NET Micro Framework requires at least 256k of RAM. .NET nanoFramework requires at least 64k of RAM, and runs on Cortex-M and RISC-V (ESP32-C3) cores. So, perhaps suitable for medium to large embedded devices, but decidedly unsuitable for small ARMs and anything below 32-bit architectures.
Ada can be used to program AVR 8-bit microcontrollers (see AVR-Ada), but it is still relatively little used. One possible reason is that while GCC GNAT is GPL3+ licensed with a runtime library exception, AdaCore sells GNAT Pro, and the FSF/GCC GNAT is seen as "inferior", with the "proper" version being the sole product of a commercial company. (Or maybe that's just me.)
I get that some consider this pointless
No, that's not it at all. Not pointless, more like bass-ackwards. We want the results too; we've just seen this approach lead nowhere before. We're trying to steer you away from repeating that, towards actually producing something interesting.
If you start a language design from scratch, you must appreciate the sheer number of design choices already made in existing languages. The languages that have survived use in anger are the ones whose choices support a programming paradigm their users find intuitive and effective.
Why did DiTBHo not start from scratch, but instead pare C down to a subset with some changes and additions, arriving at their my-C, designed for strictly controlled and enforced embedded use cases? Because they needed a tool fit for a purpose, and that was a straightforward way to get one. Results matter.
Why did SiliconWizard's Design a better "C" thread 'not go anywhere'? It just sprawled around, with individual features and other languages being discussed. In fact, it really showed how complicated and hard it is to do better than C from scratch; other languages like Ada were discussed, but nobody could say exactly why they never got as much traction as C. Just consider this post by brucehoult, about midway through the thread, on how C, warts and all, still maps so well to different hardware.
Me, I have worked on replacing the standard C library with something better. Because the C standard defines the freestanding environment –– where the C standard library is not available –– in quite some detail (unlike, say, C++, which has the same concept but leaves what it means almost entirely up to the implementation), this is doable. I aim to fix many of the issues others have with C. With C23 around the corner, the one change I think might actually make a difference is for arrays not to decay to pointers, and instead to conceptually use arrays everywhere to describe memory ranges. Even just allowing a parameter's type to vary based on a later variable in the same argument list would make it possible to replace the buffer-overrun-prone standard library functions with almost identical replacements that let the C compiler detect buffer under- and overruns at compile time. It would only take a small addition, perhaps a builtin, to make it possible to prove via static analysis that all memory accesses are valid.
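To make that concrete, here is a rough sketch (not my actual library code, just an illustration with made-up names) using C99 variably modified parameter types. Today the length has to appear in the parameter list before the arrays it bounds; the change discussed above would also permit the traditional memcpy()-style order, where the size comes last.

```c
#include <stddef.h>

/* copy_chars(): a hypothetical bounds-aware stand-in for memcpy().
   Because 'len' appears in the types of 'dst' and 'src', a compiler
   or static analyzer can relate the bound to the actual arguments
   at each call site. */
void copy_chars(size_t len, char dst[len], const char src[len])
{
    for (size_t i = 0; i < len; i++)
        dst[i] = src[i];
}

void example(void)
{
    char small[4];

    /* A sufficiently careful compiler or static analyzer can flag
       this at compile time: the bound (8) exceeds sizeof small. */
    copy_chars(8, small, "too long!");
}
```

With the argument-order restriction lifted, the same replacement could be written with the size last, keeping the signatures nearly identical to the existing standard library functions.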
In other words, I'm looking to change the parts of C that hinder me or others, not start from scratch.
Am I a C fanboi? No. If you look at my posting history, you'll see that I actually recommend using an interpreted language, currently Python, for user interfaces (for multiple reasons).
I currently use C for some embedded work (AVRs, mainly), and a mixed C/C++ freestanding environment for embedded ARM development; I also use POSIX C for systems programming in Linux (mostly on x86-64). (I sometimes do secure programming, dealing with privileges and capabilities; I got some experience as a sysadmin at a couple of universities, and from building customized access solutions, e.g. for a playground with many users at different privilege levels and subsections with their own admins, including sub-websites open to the internet. It's not simple when you're responsible for making sure nothing leaks that shouldn't.)
Okay, so if we believe that a ground-up design from scratch is unlikely to lead to an actual project solving the underlying problems OP (Sherlock Holmes) wants to solve, what would?
Pick a language, and a compiler, you feel you can work with. It could be C, it could be Ada, it could be whatever you want. Obviously, its syntax should be reasonably close to what you prefer, but it doesn't have to be an exact match. I'll use C as the example language below for simplicity only; feel free to substitute it with something else.
Pick a problem, find languages that solve it better than C, or invent your own new solution. Trace it down to the generated machine code, and find a way to port it back to C, replacing the way C currently solves it.
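As a made-up illustration of that step: Rust and Go attack the 'pointer with no length' problem with slices, and the core idea ports straight back to C as a small struct that keeps the pointer and the length together (all names below are hypothetical):

```c
#include <stdbool.h>
#include <stddef.h>

/* byte_slice: the pointer and its length travel together, slice-style,
   instead of the bare pointer C APIs usually pass around. */
struct byte_slice {
    unsigned char *ptr;
    size_t         len;
};

/* Bounds-checked element access: fails cleanly instead of reading
   outside the underlying buffer. */
static bool slice_get(struct byte_slice s, size_t i, unsigned char *out)
{
    if (i >= s.len)
        return false;
    *out = s.ptr[i];
    return true;
}
```

Then trace it down to the generated machine code: on many ABIs a two-word struct like this travels in registers, so the disassembly tells you what the extra length and the checks actually cost compared with a bare pointer, which is exactly the evidence the following steps need.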
Apply it in real life, writing real-world code that heavily uses that modified feature. Get other people to comment on it, and maybe even test it.
Find out if the replacement solution actually helps with real-world code. That often means getting an unsuspecting victim, and having them re-solve a problem using the modified feature, using only your documentation of the feature as a guide.
Keep a journal of your findings.
At some point, you find that you have enough of those new solutions to construct a completely new language. At that point, you can tweak the syntax to be more to your liking. Start writing your own compiler, but also document, precisely, the language the compiler accepts. As usual, something like ABNF is sufficient for the syntax; but for the paradigm, the approach, I suggest writing additional documentation explaining your earlier findings and the solution approach. Small examples are gold here. The idea is that other people, reading this additional documentation, can see how you thought, so they can orient themselves to make the best use of the new language.
Theory is nice, but practical reality always trumps theory. Just because the ABNF of a language looks nice doesn't mean it is an effective language. As soon as you can compile running native binaries, start creating actual utilities (sort, grep, and bc, for example), and look at the machine code the compiler generates. Just because the code is nice and the abstractions seem perfect does not mean they are fit for generating machine code. Compare the machine code to what the original language and other languages produce, with optimizations disabled (for a more sensible comparison).
During this process, do feel free to occasionally branch into designing your language from scratch. If you keep tabs on your designs as your understanding evolves, you'll understand viscerally what the 'amount of design choices' I wrote about above really means. It can be overwhelming if you think about it all at once, but going at it systematically, piece by piece, with each design choice given an explanation/justification in your journal and/or documentation, it can be done, and done better than what we have now.
Finally: I for one prefer passionate, honest, detailed posts over dispassionate politically correct smooth-talk.