Agreed that not using a debugger is a required skill. That is a really good point.
I would consider that a skill that comes after proficiency in source-level debugging.
Yes. Interactive debugging is mostly for beginners who don't understand the basics of how a CPU and programs work.
There are really two situations:
1) Code not functioning because you wrote it wrong: a source-level debugger is a huge time saver. That's what I see missing from Arduino.
This has basically NEVER happened to me, in many decades.
It's not because I don't make mistakes. It's because I don't write 1000 lines of code in a frenzy and then try to "debug" it. I write 5 or 10 lines of code, and then test it. If it breaks then I already know, ±5 lines of code, where the problem is.
If I'm working on a tricky algorithm then I develop it on a real computer, where I can run billions of test cases quickly, or generate tens of GB of debug printf output and then write little Perl scripts to look for problems. Or, in the worst case, use a source debugger, but a printf or assert between every single line of code is better than single-stepping or even setting breakpoints in a debugger. See "Perl", above. Sanity checks written in the code run at the speed of code. Conditional breakpoints in a debugger run a million times slower. OK, maybe a thousand.
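A sanity check compiled into the code, as described above, might look something like this (a minimal sketch; the macro name and the example function are illustrative, not from any particular project):

```c
#include <stdio.h>
#include <stdlib.h>

/* Cheap in-code "conditional breakpoint": the condition is tested at
   full compiled speed on every pass, and only fires (prints location
   and message, then aborts) on the rare bad case. */
#define SANITY(cond, ...)                                          \
    do {                                                           \
        if (!(cond)) {                                             \
            fprintf(stderr, "SANITY %s:%d: ", __FILE__, __LINE__); \
            fprintf(stderr, __VA_ARGS__);                          \
            fputc('\n', stderr);                                   \
            abort();                                               \
        }                                                          \
    } while (0)

/* Illustrative use: a running sum with an invariant checked between
   every step, instead of single-stepping through the loop. */
long checked_sum(const int *v, int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++) {
        sum += v[i];
        SANITY(sum >= 0, "sum went negative (%ld) at i=%d", sum, i);
    }
    return sum;
}
```

The point is that the check costs one compare per iteration, so you can leave it in for billions of test cases; a conditional breakpoint doing the same test in a debugger has to stop the target and evaluate the expression on every hit.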
Single-stepping didn't make sense on a CPU running 400k instructions per second in the 1970s, and it sure as heck doesn't make sense on a modern microcontroller running 20 or 30 million instructions per second, let alone on a PC running 10 BILLION instructions per second, per core.
The only exception is if you're so fresh you don't yet understand CPU instructions, registers, PC, stack, loops, function calls etc.
If I'm working on something that can only be done on the real hardware -- setting up GPIO, or input and output from a UART, or some SPI or i2c communication -- then I write and test ONLY THAT.
Remember the old days before JTAG, etc. We used to toggle pins to see where we were in the program, hooked up to a 2-channel scope or a logic probe, or if you were lucky, an 8- or 16-channel logic analyser. Then when we got hardware UARTs, we could send a single character to a UART to show where we were without side effects. Then, when there was a bit more memory (maybe 128 bytes...), a push button could trigger an interrupt to halt the program and send a small trace dump out of memory over the UART, or over a bit-bashed UART to a teleprinter in 5-bit Baudot....
In the mid 2000s I was writing effectively embedded software on pre-iOS/Android phones running the BREW operating system. There was no memory protection, no debugging. Not even a serial console. What you could do was draw something on the screen, maybe up in a corner so as not to interfere too much with using the app, or log printfs to a file in the onboard flash (which was never very large). And even with logging, when the program crashed you lost the last flash sector of printfs, unless you closed the file and re-opened it for append on every write, which was very, very slow because you were rewriting a complete flash sector each time.
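The close-and-reopen trick described above is roughly this (a sketch using standard C stdio on a host filesystem; BREW had its own file API, and the path and function name here are made up for illustration):

```c
#include <stdio.h>

/* Crash-safe logging: open, append one line, close, on every single
   message. The close pushes the data out to storage, so a crash
   immediately afterwards loses nothing -- at the cost of a full
   flash-sector rewrite per call on the hardware described above. */
static int log_line(const char *path, const char *msg)
{
    FILE *f = fopen(path, "a");
    if (!f)
        return -1;
    fprintf(f, "%s\n", msg);
    return fclose(f);  /* the close is what makes it crash-safe */
}
```

Keeping the file open and letting stdio buffer would be orders of magnitude faster, but then the last buffered sector of printfs dies with the program, which is exactly the data you wanted.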
Fun times.
But we got the product (AlcheMo, a Java-to-native-Arm AoT compiler) out, and it was extremely solid.