Letting a few people do random testing isn't software testing but stabbing around in the dark.
That sentence assumes that the random testing done by a few people is all the testing there is. I'd expect they do much more (though still not necessarily enough).
When I do an oscilloscope review I spend about 20 to 30 hours on testing, and I'm not even close to covering 10% of all features. Real software testing means going through each and every function & feature, for every possible permutation, according to a test plan. For a modern oscilloscope this takes at least 3 to 4 weeks full time, and it needs to be repeated for every release. Even then the testers won't catch all bugs (which is where beta-testers come in). Test plans usually evolve over time to also catch the more elusive bugs. Software testing is expensive & time-consuming, and it is tempting to skip it, but it will come back to haunt you if you do.
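To give a feel for why "every possible permutation" blows up into weeks of work, here is a hedged sketch that enumerates a test plan as a Cartesian product of settings. The setting names and values are made up for illustration; they are not any real scope's configuration.

```python
# Hypothetical illustration: counting test-plan permutations for just four
# oscilloscope settings. All names/values below are invented for this sketch.
from itertools import product

couplings = ["DC", "AC", "GND"]
bandwidth_limits = ["full", "20MHz"]
probe_attenuations = [1, 10, 100]
trigger_modes = ["auto", "normal", "single"]

# Every combination of only these four settings:
plan = list(product(couplings, bandwidth_limits, probe_attenuations, trigger_modes))
print(len(plan))  # 3 * 2 * 3 * 3 = 54 cases
```

Four settings already give 54 cases; with the dozens of interacting settings a real scope has, the product grows multiplicatively, which is why exhaustive coverage takes weeks rather than hours.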
With the last sentence I can agree 90%. It is often more expensive to skip some testing, yet in many cases much cheaper to do risk management, skip some testing, and do a few fixes afterwards. It is in the details of where one can vs. should not skip testing that we apparently disagree.
The need for full multi-week (or in our case "just" a few days of) testing only applies when there is no previous testing done at all (as for an external reviewer, or a first release), when the system is critical (say, spacecraft or a nuclear power facility), or when risk management indicates so (as is typical for so-called "major releases", which have likely collected many changes with inter-dependencies). As I mentioned before, in practical projects, simple fixes that can clearly be seen to be independent from the rest of the system do not create a need for full, 100%-all-permutations-included testing. Doing full tests for every tiny change would be like shooting a fly with a tank; there is simply no sanity in doing so in most software projects, oscilloscopes included. (Unless that scope goes into space or a nuclear power facility, but in that case the responsibility and coverage of testing is managed by that project, not by the scope manufacturer.)
Sure, sometimes a developer makes a mistake and a "clearly simple" fix ends up having side-effects after all, but these are (at least for us) very rare, and typically not a disaster even when they slip through. And even when we know a change can have wider effects, we still limit testing, simply to keep the costs sane. It is much cheaper to have a few micro-level releases afterwards (typically coming out a few days apart after a major release) than to do the 100% testing beforehand.
That is simple common sense and economic thinking. I'm not the budget manager, and I'm actually one of the most testing-demanding coders in our sub-organization, but even so, I apply some common sense when deciding testing coverage (per fix and per release).
We do run automated tests (unit tests and some integration tests) on every release; they only take about 2 minutes of work to start plus a 50-minute coffee break. But those tests don't really cover a lot, especially not the UI.
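For what it's worth, the kind of cheap per-release check described above can be as small as this. A minimal sketch using Python's stdlib unittest; the `format_channel_label` helper and its behavior are entirely hypothetical, invented just to show the shape of such a test.

```python
import unittest

def format_channel_label(index):
    """Hypothetical helper: build a UI label for a scope channel (1..4)."""
    if not 1 <= index <= 4:
        raise ValueError("channel index out of range")
    return f"Channel {index}"

class TestChannelLabel(unittest.TestCase):
    def test_valid_channels(self):
        # Guards against exactly the kind of label typo mentioned later
        # in this thread ("chanel" vs "channel").
        self.assertEqual(format_channel_label(1), "Channel 1")
        self.assertEqual(format_channel_label(4), "Channel 4")

    def test_out_of_range(self):
        with self.assertRaises(ValueError):
            format_channel_label(5)

if __name__ == "__main__":
    # exit=False so the run doesn't sys.exit() when embedded in a script
    unittest.main(exit=False)
```

Tests like this are fast and free to re-run on every build, but as noted, they only exercise logic like this helper, not the actual on-screen UI.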
And in the end, it is the customers who define the level of testing they want. If they want 100% testing for every little thing, they need to pay for it. None of our customers have been ready to do so, not even close. Siglent/Rigol customers seem to typically be even less willing.
In short, the "short" time between versions is not (necessarily) a problem here, even if these had been public releases. I'd be fine even with public releases just one day apart, if the latter release's changelog merely states "changed words chanel->channel, current->selected".
My negative "feelings" towards Siglent's software dev side have grown mostly from them (previously) letting through many bugs that are so obvious that even quick (thus cheap) testing would have caught them, and many of them so simple (thus cheap) to fix that there really was no excuse for those bugs ever to be seen in public. But all this external beta-testing activity seems to indicate Siglent is growing up.