My opinion is not more valuable than yours. There's nothing in what you say I disagree with. But the key point I'm making relates to "20 years now". You're familiar with Pascal. I'd like to see people not already familiar with it show some enthusiasm. That's where my doubts lie.
I programmed in Pascal for most of the 80s (Apple ][ UCSD, PDP-11 NBS Pascal and OMSI Pascal, VAX Pascal, Macintosh Object Pascal). I switched to C++ in 1989 and have never gone back. Pascal vs C is a non-issue; they both do the same thing. Object Pascal is close to equivalent to Objective C (not quite). C++ is in a league of its own.
By far the biggest practical advantage the C family has over Pascal for real world work is the powerful (but horrific!) preprocessor. Which you could steal and use as an external tool with Pascal code too. "Real" Pascals have some kind of conditional compilation, but it's not as powerful.
The debate over compiled vs interpreted is a perennial Internet favorite. Yes, compiled is faster, but in my view that too is a 20-year-old debate. It is much easier to find a bigger device now, which makes squeezing code down to the absolute minimum less often necessary.
More like 50 or 60 years old, and predating the internet by a long way!
Right from the start, computers have been much faster than I/O, so almost any program that is dominated by I/O could reasonably be written in an interpreted language. That includes the vast majority of business software, but not scientific computing.
Early computers had really awful instruction sets and very limited memory. The simple compilers they could run generated huge, poor code, and hand-written assembler often wasn't much better. Well-designed bytecode was much more compact, and a program didn't have to be very big before it won back the space overhead of the interpreter.
Mainframes eventually got nice instruction sets (the IBM 360 is pretty good, even by modern standards), but then early minicomputers, and early microprocessors after them, had really shit instruction sets. It's hard enough at first to fit in enough circuitry to make something that works at *all*. In microprocessors this was the case up to and including the Z80 and 6502.
A great example of the utility of interpreters on such crude machines is Wozniak's "Sweet16" for the Apple ][. It is implemented in 372 bytes of 6502 code in ROM and provides a 16-bit virtual machine with 16 mostly 1-byte instructions and 16 registers (stored in addresses 0x0000 - 0x001F). No other memory is needed for the interpreter.
Wozniak provides an example of Sweet16 interacting with native code:
300 B9 00 02 LDA IN,Y ;get a char
303 C9 CD CMP #"M" ;"M" for move
305 D0 09 BNE NOMOVE ;No. Skip move
307 20 89 F6 JSR SW16 ;Yes, call SWEET 16
30A 41 MLOOP LD @R1 ;R1 holds source
30B 52 ST @R2 ;R2 holds dest. addr.
30C F3 DCR R3 ;Decr. length
30D 07 FB BNZ MLOOP ;Loop until done
30F 00 RTN ;Return to 6502 mode.
310 C9 C5 NOMOVE CMP #"E" ;"E" char?
312 D0 13 BEQ EXIT ;Yes, exit
314 C8 INY ;No, cont.
From 30A to 30F is 6 bytes of Sweet16 interpreted code. The rest is 6502 code. Without Sweet16 a direct translation to 6502 of the copy loop would be something like this (using the same memory locations for src, dst and len as the Sweet16 code does):
LDY #0 ;Y stays 0
L0: LDA ($02),Y ;get a char from src
STA ($04),Y ;store to dst
INC $02 ;bump src low byte
BNE .+2 ;skip on no wrap
INC $03 ;bump src high byte
INC $04 ;bump dst low byte
BNE .+2
INC $05 ;bump dst high byte
LDA $06 ;16-bit decrement of len
BNE .+2
DEC $07
DEC $06
LDA $06 ;loop while len != 0
ORA $07
BNE L0
This is 32 bytes of code instead of 6 (really 9, counting the 3-byte JSR to enter Sweet16). The native code runs about ten times faster than the Sweet16 version, but for most of a typical program that speed difference doesn't matter.
(You can do better than this by keeping the low byte of the length in the Y register and not actually modifying the low bytes of the src and dst pointers. But not much better. The fastest solution requires self-modifying code.)
Modern instruction sets allow much more compact code, with the current champions being Thumb2 and RISC-V, and i386, MSP430, and SuperH not far behind. Over a whole program (not just one isolated loop), Thumb2 is about as compact as any interpreted bytecode, while running at native speed.