A higher-level language such as Swift or Lisp can further increase programmer productivity, and sometimes even program performance (through more efficient implementation of high-level abstractions).
Yes; and for those learning C, exposure to different languages can help one understand abstractions, and how abstractions differ between programming languages.
Furthermore, there are situations where a single programming language is not the most efficient approach. (I often mention that I like to write UIs in high-level interpreted languages like Python, because that way the UI is most malleable to end-user modification, and I can still keep the heavy computational core in C or C++ or perhaps some other systems programming language.)
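A minimal sketch of that split, assuming a POSIX system: the "heavy core" is a compiled C library loaded from Python via ctypes, with the surrounding logic staying in Python. Here the standard C math library stands in for a hypothetical compiled core, and the wrapper name `core_compute` is purely illustrative, not from any real project.

```python
import ctypes
import ctypes.util

# Load the C math library as a stand-in for a compiled computational core.
# On a real project this would be your own shared library built from C.
_libm = ctypes.CDLL(ctypes.util.find_library("m"))
_libm.sqrt.restype = ctypes.c_double
_libm.sqrt.argtypes = [ctypes.c_double]

def core_compute(x: float) -> float:
    """Python-side wrapper around the compiled routine; the UI layer
    only ever sees this function, never the C calling convention."""
    return _libm.sqrt(x)

print(core_compute(2.0))
```

The point of the wrapper is that the interpreted layer can be rearranged, scripted, or replaced by the end user without touching or rebuilding the compiled core.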
There are good reasons why so many games and applications incorporate domain-specific languages (from Lua to Lisp to Python), and it is not just "because it lets us use cheaper developers for the unimportant stuff"; sometimes it just makes new and worthwhile things possible. NPC logic is easier if you have abstractions to support behaviour creation, instead of building it directly from low-level arithmetic with few abstractions, as you would in C. The recent discussions on design software supporting arithmetic expressions, instead of just numeric constants, are another example: you can implement a simple numeric parser yourself, but if you instead embed a scripting language, you suddenly make things parametric and programmable. Of course, the key is whether users need these features or find them useful; a feature nobody uses nor needs is only a plus in the marketing wank. The need for another language is very difficult to see unless you have used one first-hand in similar situations and found its power, or have enough experience solving different problems in different languages to have some grasp of the different abstractions they provide; then you notice your mind telling you "feature X of Y would be nice here". (Sometimes it is wrong, though; mine is, at least. It is not an oracle, it just bubbles up ideas to test/check/verify.)
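The "expressions instead of numeric constants" idea can be sketched in a few lines. This is an illustrative restricted evaluator, not taken from any particular design tool: it stores an expression in place of a constant, and evaluates it against named parameters, allowing only numbers, names, and the four basic operators.

```python
import ast
import operator

# Map AST operator node types to their arithmetic implementations.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def eval_expr(text, params):
    """Evaluate an arithmetic expression, permitting only numeric
    literals, basic operators, and names bound in `params`."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.Name):
            return params[node.id]
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("disallowed construct in expression")
    return walk(ast.parse(text, mode="eval"))

# A width that stays half the sheet width minus a margin, whatever the sheet is:
print(eval_expr("sheet_w / 2 - margin", {"sheet_w": 210, "margin": 5}))  # 100.0
```

Change `sheet_w` and every dependent field follows; that is the parametric quality a bare numeric constant can never give you. Embedding a full scripting language takes this further, at the cost of a larger attack and complexity surface.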
I myself use C for freestanding and systems programming. I can do full graphical UIs in C (and have used GTK+ for this), but it is not a very good fit; I only do so if I am resource-constrained and higher-level abstractions cannot perform sufficiently well on the given hardware. The growth in computational power over the last couple of decades means that even cheap SBCs have enough memory and CPU power that this just doesn't happen anymore, so I don't. Using a language just because it is the one you know is not really a sensible way to pick a language, unless you are doing it for learning purposes or fun more than for any long-term reason. Besides, if you use C++, you can use Qt even on top of a raw framebuffer on an embedded machine.
Quite a lot of high-performance computing (often written in C, or a subset of C++) nowadays embeds CUDA or OpenCL code. On the graphics side, we have HLSL and GLSL (the Direct3D and OpenGL shader languages), and so on. Thus, sometimes an "embedded" or domain-specific language is a compiled, low-level one, too. There are counter-efforts like SYCL that try to unify these, so that a single compiler can handle them all in the name of programming productivity, but I'm not convinced: there will always be cases where specialized, purpose-designed tools beat generic ones, no matter how powerful their abstractions, for the very simple reason that some abstractions are contradictory, so a single tool can never hold them all. I think of the history of PHP as an example of why trying to do that – be everything and all things for everybody – backfires.
If someone tells you that language X is the only one you will ever need, I suggest you think of it the same way you would if they had told you that foodstuff Y is the only one you should ever eat. Even if they were technically correct, and I don't think they are or ever will be, it'd be rather dull and constricting.
I am following Rust with interest. The way its "borrow checker" operates when it constructs "pointers" to provide and enforce its safety guarantees seems similar and compatible to the approaches I've used in C for a couple of decades, and described here and in the other pointer thread. No, I'm not saying that shows I'm smart; I'm only saying that one reason I'm interested is that I can see useful common ground to build on. I do like the idea of people trying to create something better, avoiding the pitfalls found by earlier efforts while reaching for greater heights. But I haven't formed an opinion on Rust overall yet, not even on things like adding Linux kernel support for writing parts of it in Rust. Could be good, could be irrelevant, could be bad; I don't know yet. Only interested.
As for the C standard, well, I just haven't believed that the standard writers have C programmers' interests anywhere near the top of their priorities for well over a decade now, because of the things they have concentrated on and pushed forward, and the things they have completely failed to even try to address; so I am slowly moving on. For the longest time, POSIX (at the systems programming level, which is what I've done most of) offset or overrode any issues I could have had with the C standard, but now that "almost-POSIX" environments like WSL are cropping up again, it too may be going the way of the Dodo, due to the confusion and frustration they engender in developers. Reality wins over theory and texts, but if reality becomes unreliable or chaotic, developers tend to move on. This is not fast, however; I'm talking about one or two decades here, not next year.
None of this should alarm a new programmer. Their experience will be different from mine; just take note, observe, check/verify, and decide for yourself.