I do computer software for a living and have done for the past 35 years. I learned about things like low-level memory handling and such because back when I started, those were things you had to do yourself.
Back in the mid-80's I spent years working on an 805x-based telephone device. As I recall I had 256 bytes of E-squared (EEPROM) and perhaps twice that much RAM. Or perhaps it was the other way around. The guy who did the hardware and early software found a company who took our voice prompts and converted them to data we would then play back via a resistor network. Damn clever!
Wow. I'm impressed.
He and the voice prompt guys were soon gone, and as we hadn't paid north of $50K for their proprietary software...I was left to splice and rearrange pieces of voice data to make completely different messages as we evolved the basic design into several different and unrelated products.
Those were the days!
They were indeed! Optimization seems to be something of a lost art, at least in the software world, these days. The capabilities of systems today are so high that few people think about minimizing the compute footprint of what they're writing. There are some notable exceptions, e.g. games, but I think even those are getting to the point where optimization of anything is the exception and not the rule.
Honestly, I can't entirely fault people for that. Optimization takes time, and that time might easily be best spent elsewhere. The problem is that when optimization is something that is rarely done, the techniques of optimization end up being forgotten (or, worse, never learned in the first place!), and it gets to the point where optimization can no longer happen even when it is clearly warranted.
What I understand least is the hardware end. My knowledge of hardware CPU/memory architectures is decades out of date, so I can't imagine how one might design such a highly optimized system. It's pretty amazing that we can in any way afford such capable instruments; did the price/performance criteria force a hardware design that somehow causes these buffers to be lost? I have no way of knowing.
I started in on an EE track before I finally switched to CS way back in the day (I switched because I found myself playing around with computers in my spare time, and figured that was an indication of where my primary interests were; and if that's where my primary interests were, it's probably what I should build my career on. Best decision I ever made). But I never entirely forgot about hardware and how it's constructed, and never entirely lost my interest in it.
CPUs today are incredibly complex beasts with all sorts of crazy optimization mechanisms (pipelining, lookahead, branch prediction, speculative execution, etc.). But even then, the fundamentals haven't really changed all that much.
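To make one of those mechanisms concrete, here's a rough, self-contained sketch showing branch prediction at work. It's plain standard C++, not anything from a scope's firmware, and the array size, threshold, and iteration count are arbitrary choices of mine; the only point is that the same loop runs much faster once the data is sorted, because the branch becomes predictable.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

// Sum only the "large" elements; the if() is the branch the predictor must guess.
static int64_t sumLarge(const std::vector<int>& v) {
    int64_t sum = 0;
    for (int x : v)
        if (x >= 128)
            sum += x;
    return sum;
}

// Time 100 passes over the data, accumulating into 'sink' so the work isn't optimized away.
static double timeIt(const std::vector<int>& v, int64_t& sink) {
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < 100; ++i)
        sink += sumLarge(v);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dist(0, 255);
    std::vector<int> data(1 << 20);
    for (int& x : data) x = dist(rng);

    int64_t sink = 0;
    double unsorted = timeIt(data, sink);        // branch outcome is essentially random
    std::sort(data.begin(), data.end());
    double sorted = timeIt(data, sink);          // branch outcome is highly predictable

    std::printf("unsorted: %.1f ms, sorted: %.1f ms (checksum %lld)\n",
                unsorted, sorted, (long long)sink);
    return 0;
}
```

The actual numbers will vary wildly from CPU to CPU, which is sort of the point: the instruction set looks the same as it did decades ago, but what the silicon does with it underneath has changed enormously.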
As regards the Siglent scopes, it's possible that changes to anything that has to look inside a capture somehow demand that the history be cleared. If that's truly the case, then there's no solving the problem of the history being cleared when, say, tracking cursors are moved while acquisition is running, or measurements are added or removed, etc. It would be surprising, to say the least, for that to be the case. But it's not entirely beyond the realm of possibility.
From a family perspective, wouldn't the SDS1K class differ from the SDS2K class?
Yes and no. With respect to some of the details, certainly. But there's an enormous amount of effort that goes into the design of one of these scopes. You don't just throw away an architecture, particularly one that works well for you, and design something from scratch unless you've no real choice (the market can sometimes demand this, but it's relatively rare). Siglent has an enormous investment in their architecture, both with respect to the hardware and with respect to the software.

And those two things (hardware and software) are fairly tightly coupled. You can craft the software in a somewhat hardware-independent manner, via various abstraction techniques, and it's certainly in their best interests to do so to the degree reasonably possible. But at some point the software has to interact with the hardware, and it's at that point that hardware dependence within the software will appear.
And note that it's not uncommon for abstraction methods within software to incur a performance cost, which of course goes back to the question of optimization. Sometimes you have to choose between greater hardware dependence within your software and poorer performance of a more abstract approach.
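As a minimal sketch of that trade-off (hypothetical names throughout; AcquisitionHw, Sds2kHw, startCapture, and fake_ctrl_reg are all my inventions and have nothing to do with Siglent's actual firmware), here's what a hardware abstraction layer versus direct hardware access might look like. The abstract path keeps most of the code hardware-independent but pays for it with an indirect call on every operation:

```cpp
#include <cstdint>

// Stand-in for a memory-mapped control register. On real hardware this would be a
// fixed address; here it's just a volatile variable so the example runs anywhere.
static volatile uint32_t fake_ctrl_reg = 0;

// Abstract interface: the bulk of the software only ever sees this.
struct AcquisitionHw {
    virtual ~AcquisitionHw() = default;
    virtual void startCapture() = 0;
};

// Hardware-dependent layer: the only place that knows about the register layout.
struct Sds2kHw : AcquisitionHw {
    void startCapture() override { fake_ctrl_reg = fake_ctrl_reg | 0x1; }
};

// Hardware-independent code: portable across hardware revisions, but every call
// goes through the virtual dispatch -- usually negligible, yet not free in a
// tight per-trigger or per-sample path.
void runCapture(AcquisitionHw& hw) { hw.startCapture(); }

// Hardware-dependent alternative: no indirection at all, but this code has to
// change whenever the register layout does.
void runCaptureDirect() { fake_ctrl_reg = fake_ctrl_reg | 0x1; }

int main() {
    Sds2kHw hw;
    runCapture(hw);      // abstract path
    runCaptureDirect();  // direct path
    return (fake_ctrl_reg & 0x1) ? 0 : 1;
}
```

In the slow paths (menus, configuration, and the like) the abstraction cost is irrelevant; it's in the hot acquisition and display paths that a designer may be tempted to collapse the layers and accept the hardware dependence.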
That said, rf-loop has, as I recall, provided things like block diagrams and such showing how the scope does its thing.
Thanks, I'll check that out.
Well, as I said afterwards in a subsequent edit, I don't actually think he has block diagrams and such of how the scope's acquisition and storage mechanisms work, at least not in enough detail to prove revealing. But if he does, then I'd certainly be interested in seeing them.