Of course you need to hire better people. Maybe even get some people from A-brand firms who know how to build good test equipment. But these people aren't cheap.
They're not cheap, but I dare say they're not so expensive as to demand that the company get public funding! I expect we're talking about $250K/year per top-level person, though that's just my guess; people who are familiar with test equipment firmware may well be considerably more expensive than that. But the nature of the problem in test equipment is that you have what amounts to a real-time system underneath with a somewhat-real-time user interface on top, all rolled into one.
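To make that split concrete, here's a minimal sketch of the shape of the problem, in Python purely for illustration (real scope firmware would be C/C++ on an RTOS, and none of this reflects Siglent's actual design): a hard-real-time acquisition loop that must never block, feeding a bounded queue that a slower UI loop drains at whatever pace it can manage, dropping frames rather than stalling the acquisition side.

    import queue
    import threading
    import time

    # Bounded queue decoupling the real-time producer from the slower UI consumer.
    frames = queue.Queue(maxsize=4)

    def capture_waveform():
        # Stand-in for the real ADC/DMA readout (hypothetical).
        return [0.0] * 16

    def render(seq, waveform):
        # Stand-in for the actual screen redraw (hypothetical).
        print("frame %d: %d samples" % (seq, len(waveform)))

    def acquisition_loop():
        # The "real-time system underneath": must never block waiting on the UI.
        seq = 0
        while True:
            waveform = capture_waveform()
            try:
                frames.put_nowait((seq, waveform))  # never wait on the consumer
            except queue.Full:
                pass  # UI has fallen behind; drop the frame rather than stall
            seq += 1
            time.sleep(0.001)  # pretend ~1 kHz trigger rate, for the demo

    def ui_loop(max_frames=20):
        # The "somewhat-real-time UI on top": redraws at its own pace.
        for _ in range(max_frames):
            seq, waveform = frames.get()
            render(seq, waveform)

    threading.Thread(target=acquisition_loop, daemon=True).start()
    ui_loop()

Getting that decoupling wrong, i.e., letting the UI side back-pressure the acquisition side, is exactly the kind of ground-level design mistake that no amount of later patching fixes.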
As I mentioned, the primary issue is with design. The engineering itself has to be solid from the ground up. If you've got a bad design, you can only do so much with it; past a certain point, the effort required to improve it further exceeds the effort a ground-up rewrite would take.
This is where you should be spending a significant portion of your money, but the amount of money we're talking about here isn't anything like the 1.2 billion Yuan (nearly $200 million) raised in Siglent's IPO (https://www.crunchbase.com/ipo/siglent-technologies-ipo--44da7c8d).
At the moment it seems Siglent has several outside beta-testers doing some firmware testing. I don't get the impression these people are being paid much.
People who work on open source projects often don't get paid at all. And yet they volunteer their time and effort anyway, and the end result is often of higher quality than commercial efforts (consider, for example, the rise of Linux and its toolchain, which are actually separate, independent things, but both of which are a considerable improvement over the commercial Unix offerings that preceded them).
If someone is being paid officially for their efforts, then there is usually some relationship between the quality of the work and what they're being paid for it. But the volunteer efforts often come out ahead anyway, because what a paid engineer does for the job isn't necessarily what they're primarily interested in, and even if it is, the business itself winds up placing constraints on them that prevent them from doing the job to the quality they'd prefer. I've seen this countless times over the years: paid software engineers prevented from developing their software properly because the beancounters insisted that time to release was more important than the quality of the release. It doesn't help that some of the in-vogue development processes (e.g., "Agile/Scrum", wherein development happens in fixed-time "sprints", the primary focus is on iterative development rather than proper up-front engineering, and the assumption is that proper quality and hard deadlines aren't in conflict) effectively sabotage quality while claiming to improve it.
The beta testers you want the most are the ones who would actually use the instrument and its firmware to do real-world things. I can't count how many times I've seen customers break software in ways that QA didn't catch because QA simply didn't account for the specific use case, or didn't have the proper test setup, or a myriad of other things. There's a lesson to be learned from all that:
artificial test cases are no substitute for real-world testing.
If Siglent is smart, then their beta testers are people who intend to use the final firmware for real-world work, and who are doing their testing in the course of that work. Their alpha testers will be in-house people, tasked with developing and implementing automated, repeatable functional testing (see the sketch below).
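For that in-house, automated side, the usual approach is to drive the instrument over its SCPI remote-control interface so the same functional checks can be re-run identically against every firmware build. Here's a minimal sketch using pyvisa; the *IDN?/*RST/*OPC?/*ESR? commands are standard IEEE-488.2, but the resource address and pass criteria are made up for illustration, and I'm not claiming this is what Siglent actually does.

    import pyvisa

    # Hypothetical VISA resource string; substitute the scope's real address.
    SCOPE_ADDR = "TCPIP0::192.168.1.50::INSTR"

    def test_identify_and_reset():
        # Repeatable smoke test: instrument answers *IDN? and survives *RST.
        rm = pyvisa.ResourceManager()
        scope = rm.open_resource(SCOPE_ADDR)
        scope.timeout = 5000  # ms

        idn = scope.query("*IDN?")  # IEEE-488.2 identification query
        # Assumes the vendor name appears in the *IDN? response string.
        assert "Siglent" in idn, "unexpected instrument: %r" % idn

        scope.write("*RST")    # reset to a known default state
        scope.query("*OPC?")   # block until the reset has completed

        # *ESR? reads the standard event status register; nonzero bits
        # would indicate a command or execution error during the sequence.
        esr = int(scope.query("*ESR?"))
        assert esr == 0, "error bits set after reset: 0x%x" % esr

        scope.close()

    if __name__ == "__main__":
        test_identify_and_reset()
        print("smoke test passed")

The point is repeatability: once a check like this exists, a regression in a later firmware build gets caught mechanically instead of depending on someone happening to notice it.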
User interfaces bring an entirely different set of needs into the mix. You need people who are willing to try different user interface approaches in order to get a sense of what works better, more efficiently, more clearly, etc. Obviously some of these people should be your own in-house hardware engineers who use test equipment for their jobs; basically, an "eat your own dog food" approach. And it should be clear that much of the testing that goes on at all levels should be done by such people, with the understanding that at the end of the day they must still be productive. If your product is so broken that such people can't use it, then it's time to go back to the drawing board.
The whole testing activity needs to be organised in-house, but that also requires having people on the payroll who actually know how the equipment should work (i.e., what users expect). One of the statements that made me chuckle recently: the new SDS5k scope works great with a mouse. FFS, it's a touchscreen scope!
Why would the fact that it works with a mouse be concerning at all? It turns out that it's actually rather useful for it to work with a mouse. Consider, for instance, the case where the scope is near the rear of the bench. If you're going to manipulate it directly, you have to reach over everything on the bench to get at it. With a mouse (particularly a wireless one), you don't. The mouse can be within easy reach, and you can use it to manipulate the scope while manipulating the probes with your other hand, without the scope itself needing to be within easy reach. A mouse also takes up much less space than the scope does, and it doesn't have to be positioned so that it's easily visible the way the scope does.
So as much as you might malign the use of a mouse with these things, it turns out to be a very useful capability.