What's in a tool?
I've written a raytracer – or, to be precise, a mixed-method raycaster – for rendering ball and ball-and-stick atomic models containing anything from just a few atoms to hundreds of millions of atoms, to produce quality animations of events in simulated molecular systems. You know, the kind of size and complexity that gives your Nvidia card apoplexy. Because of the algorithmic complexity, I had to go back to the underlying math, ten years ago, to get things to work reliably. As I was using an old desktop machine with just a couple of gigs of RAM at the time, I even had to use memory-mapped files for accessing the data (if you know what MAP_NORESERVE does, you'll probably understand how). (That entire tale – from combining different rasterization methods to reduce the number of objects tested for each ray, to considering information density in visualization and the weirdnesses of human perception, and exploiting them to convey information, to producing publication-quality graphics via low-resolution pixel images underneath raycast-solved vector outlines – is a funky tale in and of itself: I learned a lot there. Then again, I actually worked it all out, instead of using existing toolkits and patting myself on the back for how good a "programmer" I am. I didn't do it because I knew how to do it; I did it to learn how to do it.)
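To illustrate the memory-mapping trick mentioned above: a read-only mapping with MAP_NORESERVE lets a process address a data file far larger than physical RAM, with the kernel paging it in on demand and free to drop clean pages at any time. The sketch below is illustrative only – the file name and record layout are my assumptions, not the original code:

```c
/* A minimal sketch of mapping a large read-only data file without
 * reserving swap. "atoms.bin" is a hypothetical file name. */
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    const char *path = "atoms.bin";   /* hypothetical data file */
    int fd = open(path, O_RDONLY);
    if (fd == -1) { perror(path); return 1; }

    struct stat st;
    if (fstat(fd, &st) == -1) { perror("fstat"); close(fd); return 1; }

    /* MAP_NORESERVE: do not reserve swap for the mapping, so even a
     * multi-gigabyte file maps fine on a machine with a couple of
     * gigs of RAM; pages are faulted in from the file on demand. */
    void *data = mmap(NULL, st.st_size, PROT_READ,
                      MAP_PRIVATE | MAP_NORESERVE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); close(fd); return 1; }
    close(fd);  /* the mapping keeps its own reference to the file */

    /* ... walk the atom records via plain pointers into 'data' ... */

    munmap(data, st.st_size);
    return 0;
}
```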
I do that kind of low-level code-mongery all the time, and for that, C seems most suitable. Raytracing has long since passed from the pure mathematics of descriptive geometry to emphasizing the features of human perception (and exploiting its limitations), so reproducing early simple examples by applying extremely powerful existing libraries is a good exercise, but nothing impressive.
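For concreteness, the core of that descriptive geometry is small: each ray gets tested against candidate spheres. A minimal sketch of that per-ray test – my own illustration, not code from the raycaster described above – might look like this:

```c
/* Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for the
 * nearest t > 0, assuming the direction d has unit length. */
#include <math.h>
#include <stdbool.h>

typedef struct { double x, y, z; } vec3;

static double dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3   sub3(vec3 a, vec3 b) { return (vec3){ a.x-b.x, a.y-b.y, a.z-b.z }; }

/* Returns true and stores the nearest hit distance in *t on a hit. */
bool ray_sphere(vec3 o, vec3 d, vec3 c, double r, double *t)
{
    vec3 oc = sub3(o, c);
    double b = dot3(oc, d);                 /* half the quadratic's b term */
    double disc = b*b - (dot3(oc, oc) - r*r);
    if (disc < 0.0)
        return false;                       /* ray misses the sphere */
    double s  = sqrt(disc);
    double t0 = -b - s, t1 = -b + s;        /* two roots, t0 <= t1 */
    *t = (t0 > 0.0) ? t0 : t1;              /* nearest hit in front of origin */
    return *t > 0.0;
}
```

The hard part at scale is not this test, but doing everything possible to avoid running it – hence the mixed rasterization methods mentioned above.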
Back to the topic at hand. Why C?
The reason you do not want to use C++ for low-level cross-language libraries is the risk of incompatible dependencies between different versions of the C++ runtime. In theory, the dynamic linkers should handle it without issues, but reality is imperfect. The nasty situation occurs when an application A, written in some language other than C++, needs libraries B and C, both of which depend on C++ but are compiled against different versions of the C++ runtime.
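The usual way around this is to keep the boundary pure C: opaque handles plus plain functions, so no C++ runtime types cross the interface and any language's FFI can bind the library. A minimal sketch of such a header (all names hypothetical) might be:

```c
/* model.h -- illustrative C interface with a stable ABI: an opaque
 * handle and plain functions, nothing from any C++ runtime. */
#ifndef MODEL_H
#define MODEL_H

#include <stddef.h>

typedef struct model model;   /* opaque: layout hidden from callers */

model  *model_open(const char *path);     /* NULL on failure */
size_t  model_atom_count(const model *m);
void    model_close(model *m);

#endif /* MODEL_H */
```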
A lot of microcontroller development now uses some subset of C++ (or GNU C++, the GCC dialect with GNU extensions). While standard C explicitly defines the freestanding environment (used in kernels and on microcontrollers, where the standard C library and associated services are not available), the subset of C++ used varies, and is rarely if ever documented explicitly. Such subsets usually do not support exceptions, but which other C++ features are verboten varies a lot.
Is it still C++ if one is supposed to keep to a strictly undocumented subset of its features – not just avoiding the standard library, but also core language features like exceptions? For C, at least, the freestanding environment is well documented; for C++, almost all of it is implementation-defined, and varies from implementation to implementation. Can you still call it C++ if you need to test for support of each feature separately at build time?
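For contrast, here is roughly what well-defined freestanding C looks like: no hosted standard library, only compiler-provided headers such as <stdint.h> and <stddef.h>, built with something like gcc -ffreestanding -nostdlib. The register address and entry point below are invented for illustration and match no particular chip:

```c
#include <stdint.h>

/* Hypothetical memory-mapped GPIO output register. */
#define GPIO_OUT (*(volatile uint32_t *)0x40020014u)

/* Freestanding: no main(), no C runtime startup; the reset vector
 * (or linker script) points here directly. */
void _start(void)
{
    for (;;)
        GPIO_OUT ^= 1u;   /* toggle one pin forever */
}
```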
To me, it only matters when this kind of discussion rears its head. Whenever you see someone ridiculing others for using a tool they themselves cannot wield efficiently, there is stupidity afoot.
Consider this:
One summer, you are on a walk along a rocky river when you come across a guy – obviously not an outdoorsman, with pasty skin that has not seen enough sunlight and is now slightly burning – banging flint with a rock, evidently trying to recreate the way stone-age humans made their flint axes thousands if not tens of thousands of years ago.
You smirk to yourself, and walk past without saying a word.
What you do not know is that the guy holds two PhDs, is an expert in atomic force microscopy and nanorheometry, and is spending their summer day getting some real-world practice in knapping flint. Working at the scale of individual atoms, those materials science instruments need sharp-tipped probes, preferably a single atom wide at the tip, that can withstand the vibration and movement close to the materials being investigated; and even today, knapping flint -- robotically, of course, nowadays -- is the best, most repeatable way we have of producing such minuscule tips.
What to you was an idiot banging rocks together was actually somebody who understands reality fundamentally better than you do.
Why didn't you know this? Because Dunning-Kruger; and because tools are tools, and some of them bridge a stone-age human and a leading materials scientist of the 2020s.
While C really isn't that good of a language, we don't have anything better in its niche – low-level libraries with bindings to multiple languages, and so on. Anything that has a significant runtime or significant standard library won't cut it. It's just a tool most suitable for its particular kinds of nails.
So, the simple answer is
"because there are still problems for which C is the best tool".
Right now, I like doing the math-crunchy stuff (i.e., low-level code) in C, and the user interface in Python. I like the idea of end users being able to modify the user interface if needed.
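As a sketch of what that split looks like in practice (the function and file names here are mine, purely illustrative): the C side is an ordinary shared library, and the Python side loads it through ctypes, without any glue code.

```c
/* compute.c -- hypothetical number-crunching kernel.
 * Build as a shared library, e.g.:
 *   gcc -O2 -fPIC -shared -o libcompute.so compute.c
 * The Python UI can then bind it via ctypes:
 *   import ctypes
 *   lib = ctypes.CDLL("./libcompute.so")
 *   lib.dot.restype  = ctypes.c_double
 *   lib.dot.argtypes = (ctypes.POINTER(ctypes.c_double),
 *                       ctypes.POINTER(ctypes.c_double),
 *                       ctypes.c_size_t)
 */
#include <stddef.h>

/* Plain C ABI: no name mangling, so any FFI can find "dot". */
double dot(const double *a, const double *b, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}
```

The same library is equally callable from Lua, Julia, or anything else with a C FFI – which is the whole point.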
I don't touch Perl due to old battle scars. Not really Perl's fault: I just had to work with a code base full of copy-paste-modify-slightly near-duplicates, critical to my users... You know the type: instead of fixing existing bugs, a function is copied and the copy fixed, so only (some) future callers use the fixed version while the rest keep using the old one. (This was two decades ago now, and Perl has evolved a lot since then, so I really should take a look again. It's just that those scars itch easily.)
For quite a few years, I wrote lots and lots of PHP code. Usually, the most critical part was to ensure none of the admins with access to the servers enabled "magic quotes" or the other PHP misfeatures that gave the language its reputation. You can write quite secure code even in PHP.
Fortran is still common in HPC, especially physics and chemistry simulations. While newer tools tend to be written in C or C++, if you want to duplicate old published results, you need to use the same old code, too. Sometimes that is very important: if you want to know exactly why your results differ from previously published similar-but-not-exactly-the-same results, you add your new model details to the old code, and use that old code to compare the results for both cases.
The thing that annoys me most about some programmers is that so many of them write code in the first language they learned, in every language they purport to know.
This is nothing new. Already decades ago there was a saying, "one can write FORTRAN code in any programming language", that describes this well.
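A contrived illustration of what that saying means – both snippets below compute the same sum, but the first is FORTRAN 66 thinking transliterated into C syntax, while the second is the idiomatic C:

```c
#include <stddef.h>

/* "FORTRAN written in C": fixed-size global arrays, indices starting
 * at 1 (x[0] is wasted), and a goto where C has a for loop. */
double x[100 + 1];
double total;
int i;

void fortran_style(void)
{
    total = 0.0;
    i = 1;
loop:
    total = total + x[i];
    i = i + 1;
    if (i <= 100) goto loop;
}

/* The same computation as idiomatic C. */
double sum(const double *v, size_t n)
{
    double s = 0.0;
    for (size_t j = 0; j < n; j++)
        s += v[j];
    return s;
}
```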
In my opinion, that's not "knowing" anything; it is just mechanical translation. It is like claiming you understand something because you can recite a mathematical formula that in some situations describes its behaviour.
You could even say that some of the above examples of C code fall into this category, because they show code that would cause anybody familiar with existing C codebases to raise their eyebrows. It is like saying that English is a horribly bad language because in it "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo." is a well-formed, valid sentence.
Every programming language has some core beauty, some idea, some paradigm, that makes it efficient and useful (according to some metric -- it is easier to just call it beauty) for some tasks. It is not important or useful to be able to mechanically translate code from one language to another; the important and useful part is understanding that beauty, and integrating it into your toolkit -- and then being able to choose the appropriate tool from your toolkit for whatever problem you have at hand.