Author Topic: Oscilliscope memory, type and why so small?  (Read 32030 times)


Online Someone

  • Super Contributor
  • ***
  • Posts: 5016
  • Country: au
    • send complaints here
Re: Oscilliscope memory, type and why so small?
« Reply #25 on: March 07, 2017, 02:15:51 am »
I actually do understand quite a few of the underlying limitations of memory; don't be so arrogant. This thread is more about how those limitations might prevent scopes from having plenty of memory.
So take a low-end scope with 4 channels at 5 GS/s: that needs a minimum uninterrupted bandwidth of 160 Gb/s, with no chance to pause or wait unless you add more buffers (FIFOs) to the system. That's already at the bleeding edge of off-the-shelf systems:
https://en.wikipedia.org/wiki/List_of_device_bit_rates#Dynamic_random-access_memory
So it's not even possible to deliver just the entry-level performance people expect from a scope unless you add a lot of extra hardware and/or specialised memory. Now try to secure a contract with the manufacturer for a 5-10 year part life; it's not possible to compare test and measurement equipment to commodity PCs.
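The arithmetic behind that 160 Gb/s figure, as a quick sketch (the 8-bit sample width is an assumption here, though it is typical for entry-level scopes):

```python
# Back-of-envelope check of the minimum sustained acquisition bandwidth
# for a 4-channel, 5 GS/s scope. Assumes 8-bit samples.
channels = 4
sample_rate_gsps = 5        # gigasamples per second, per channel
bits_per_sample = 8

total_gbps = channels * sample_rate_gsps * bits_per_sample
print(total_gbps, "Gb/s of uninterrupted write bandwidth")
```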
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28111
  • Country: nl
    • NCT Developments
Re: Oscilliscope memory, type and why so small?
« Reply #26 on: March 07, 2017, 02:27:38 am »
I actually do understand quite a few of the underlying limitations of memory; don't be so arrogant. This thread is more about how those limitations might prevent scopes from having plenty of memory.
So take a low-end scope with 4 channels at 5 GS/s: that needs a minimum uninterrupted bandwidth of 160 Gb/s, with no chance to pause or wait unless you add more buffers (FIFOs) to the system. That's already at the bleeding edge of off-the-shelf systems:
https://en.wikipedia.org/wiki/List_of_device_bit_rates#Dynamic_random-access_memory
So it's not even possible to deliver just the entry-level performance people expect from a scope unless you add a lot of extra hardware and/or specialised memory. Now try to secure a contract with the manufacturer for a 5-10 year part life; it's not possible to compare test and measurement equipment to commodity PCs.
The page you link to is about PC memory modules, so it's not applicable to a digital oscilloscope design (and PCs use wider buses to increase memory bandwidth too: the Xeon E5 in my PC has a maximum DDR3 memory bandwidth of 68 GB/s = 544 Gb/s and uses 4 memory channels to achieve that). In a DSO you can easily use much wider memory by using an ASIC or FPGA. Even 40 GS/s isn't a problem for a DDR3 memory solution if you just make the bus wider. The real bottleneck is doing something useful with all that data, like decoding and so on.
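As a rough sketch of the "just make the bus wider" arithmetic, assuming DDR3-1600 (1.6 Gb/s per data pin) and 8-bit samples; a real design would also derate for refresh and bank-turnaround overhead:

```python
import math

# How wide would a DDR3 bus need to be to absorb a 40 GS/s, 8-bit stream?
sample_rate_gsps = 40
bits_per_sample = 8
per_pin_gbps = 1.6            # DDR3-1600: one data pin carries 1.6 Gb/s

required_gbps = sample_rate_gsps * bits_per_sample
min_data_pins = math.ceil(required_gbps / per_pin_gbps)
print(required_gbps, "Gb/s needs at least", min_data_pins, "data pins")
```

That is wide, but nothing exotic for an FPGA or ASIC with several memory controllers in parallel, which is the point being made here.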
« Last Edit: March 07, 2017, 02:38:04 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 5016
  • Country: au
    • send complaints here
Re: Oscilliscope memory, type and why so small?
« Reply #27 on: March 07, 2017, 02:39:45 am »
I actually do understand quite a few of the underlying limitations of memory; don't be so arrogant. This thread is more about how those limitations might prevent scopes from having plenty of memory.
So take a low-end scope with 4 channels at 5 GS/s: that needs a minimum uninterrupted bandwidth of 160 Gb/s, with no chance to pause or wait unless you add more buffers (FIFOs) to the system. That's already at the bleeding edge of off-the-shelf systems:
https://en.wikipedia.org/wiki/List_of_device_bit_rates#Dynamic_random-access_memory
So it's not even possible to deliver just the entry-level performance people expect from a scope unless you add a lot of extra hardware and/or specialised memory. Now try to secure a contract with the manufacturer for a 5-10 year part life; it's not possible to compare test and measurement equipment to commodity PCs.
The page you link to is about PC memory modules, so it's not applicable to a digital oscilloscope design (and PCs use wider buses to increase memory bandwidth too). In a DSO you can easily use much wider memory by using an ASIC or FPGA. Even 40 GS/s isn't a problem for a DDR3 memory solution if you just make the bus wider. The real bottleneck is doing something useful with all that data, like decoding and so on.
Current commodity DIMMs are 64-bit-wide interfaces, so that's an example of just how far away PCs already are. Of course you can put the memory chips on board and have a bus of any arbitrary width, but making the bus wide enough to absorb all the data is just the first of many problems: it takes up a lot of board real estate, package pins, power, etc. That's already required as a separate front-end system for the LeCroy X-Stream style of scope, where you FIFO the data and then read it out leisurely using commodity PC hardware (with all the benefits and drawbacks that entails). Memory is not something that is just added piecemeal to a scope; the incremental cost of adding more can be extremely high, unlike the picture the OP is trying to paint.

As you say, it's possible, but then you still need to do something with those huge amounts of data in memory.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9940
  • Country: us
Re: Oscilliscope memory, type and why so small?
« Reply #28 on: March 07, 2017, 02:44:54 am »

It was just an example of a common signal of substantial length. For sure LAs are good for this particular case, but they are not free of cost and are not generalist tools like o-scopes, so frequency of use is a factor.
As for the DS1054Z: some of the newest scopes are finally catching up, which is a good thing for sure, but to me it seems a few years late. There are still a large number that haven't caught up, and some companies are spending money on marketing to push sub-par products rather than just engineering a better product. To my mind, in 2017 memory depth shouldn't even be a purchase selection criterion outside of some very niche use cases.


It's a pretty unrealistic sample for a scope.  You would be twiddling the horizontal position knob for days.  The only hope would be to download the data to a PC and write some kind of script to wander through the samples.

So, how much memory do you think a scope should have?  If I run a slow time base, it takes a long time to fill the memory.  The good news is that it is possible to reduce memory depth.

I can't imagine a scenario where I would need more than 6 Mpts per channel.  Sure, if I was a big time developer with a $200k Keysight scope, my expectations might be different.  What I have is a $400 low end scope for hobby use.  And my Tek 485 with NO sample memory.

 

Offline MrW0lf

  • Frequent Contributor
  • **
  • Posts: 922
  • Country: ee
    • lab!fyi
Re: Oscilliscope memory, type and why so small?
« Reply #29 on: March 07, 2017, 09:16:41 am »
I can't imagine a scenario where I would need more than 6 Mpts per channel.

My screen is really too low resolution for this kind of play but still:
CH1: 50MHz square
CH2: 100Hz 10% square pulses
2x25M main memory used.
4-digit measurements + 1M FFT on both.
Both signals are completely unrelated in phase, but triggered stably using a rising-edge correlation point (logic triggers).
The UI is responsive; wfm (not UI!) refresh rate is approx 1 Hz (2x FFT introduces some processing load).


« Last Edit: March 07, 2017, 01:13:52 pm by MrW0lf »
 

Offline tautech

  • Super Contributor
  • ***
  • Posts: 29492
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: Oscilliscope memory, type and why so small?
« Reply #30 on: March 07, 2017, 09:19:43 am »
Now you're just showing off Wolfie.  :-DMM

Edit
Take care Fungus is watching.  ::)
Avid Rabid Hobbyist.
Some stuff seen @ Siglent HQ cannot be shared.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28111
  • Country: nl
    • NCT Developments
Re: Oscilliscope memory, type and why so small?
« Reply #31 on: March 07, 2017, 09:24:38 am »

It was just an example of a common signal of substantial length. For sure LAs are good for this particular case, but they are not free of cost and are not generalist tools like o-scopes, so frequency of use is a factor.
As for the DS1054Z: some of the newest scopes are finally catching up, which is a good thing for sure, but to me it seems a few years late. There are still a large number that haven't caught up, and some companies are spending money on marketing to push sub-par products rather than just engineering a better product. To my mind, in 2017 memory depth shouldn't even be a purchase selection criterion outside of some very niche use cases.
It's a pretty unrealistic sample for a scope.  You would be twiddling the horizontal position knob for days.  The only hope would be to download the data to a PC and write some kind of script to wander through the samples.
That is why a good scope with deep memory should have a search function. And you are not twiddling knobs all day. I use deep memory all the time to combine various measurements in one acquisition. Being able to go back and forth between an overview of the signal and zooming in on details is a major time saver.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline MrW0lf

  • Frequent Contributor
  • **
  • Posts: 922
  • Country: ee
    • lab!fyi
Re: Oscilliscope memory, type and why so small?
« Reply #32 on: March 07, 2017, 09:27:00 am »
Take care Fungus is watching.  ::)

No worries there; after getting a ban I'm in full agreement with the fact that his scope is the best bang for the buck under $1000. Might even change to MrLamb soon ::)
« Last Edit: March 07, 2017, 09:33:52 am by MrW0lf »
 

Offline tautech

  • Super Contributor
  • ***
  • Posts: 29492
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: Oscilliscope memory, type and why so small?
« Reply #33 on: March 07, 2017, 09:38:44 am »
Take care Fungus is watching.  ::)

No worries there; after getting a ban I'm in full agreement with the fact that his scope is the best bang for the buck under $1000. Might even change to MrLamb soon ::)
Some have no idea of how cheap that design is and are too blind to see.
I missed the fireworks; guess it was the week I went fishing. PM me with what went down if you like.
Avid Rabid Hobbyist.
Some stuff seen @ Siglent HQ cannot be shared.
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 5016
  • Country: au
    • send complaints here
Re: Oscilliscope memory, type and why so small?
« Reply #34 on: March 07, 2017, 09:58:15 am »
Take care Fungus is watching.  ::)

No worries there; after getting a ban I'm in full agreement with the fact that his scope is the best bang for the buck under $1000. Might even change to MrLamb soon ::)
Some have no idea of how cheap that design is and are too blind to see.
I missed the fireworks; guess it was the week I went fishing. PM me with what went down if you like.
Don't worry, at its 1 Hz update rate it's 95% blind too ;)
 

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 17242
  • Country: 00
Re: Oscilliscope memory, type and why so small?
« Reply #35 on: March 07, 2017, 09:59:25 am »
Now you're just showing off Wolfie.  :-DMM

Edit
Take care Fungus is watching.  ::)

I'm just bookmarking where he says 1Hz update is a "responsive UI".  :popcorn:
 

Offline MrW0lf

  • Frequent Contributor
  • **
  • Posts: 922
  • Country: ee
    • lab!fyi
Re: Oscilliscope memory, type and why so small?
« Reply #36 on: March 07, 2017, 10:22:28 am »
I'm just bookmarking where he says 1Hz update is a "responsive UI".  :popcorn:

Wfm update rate is ~1 Hz in this specific example; it has nothing to do with UI control responsiveness (unlike on some DSOs). Also, it's largely dependent on the logic trigger condition matching (edges must exactly match). I'll remind you that the signals are completely unrelated in phase. I'm free to remove the matching and analyze single-shot, or apply some other condition.

Overall, I think some limitations can be considered normal when using a low-end DSO in a scenario reminiscent of the Tek MDO4000C PC interface or a large-screen LeCroy. It's like whining that a cheap 10-digit counter takes seconds to stabilize... so... it does the job at the given budget :-//

Edit: I'm really trying to be MrLamb but still my kettle got a bit going :P What is the "refresh rate" of doing a raw data export from a simple DSO and doing offline analysis? Or the "refresh rate" of manual fiddling in STOP mode? :popcorn:
« Last Edit: March 07, 2017, 10:57:36 am by MrW0lf »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28111
  • Country: nl
    • NCT Developments
Re: Oscilliscope memory, type and why so small?
« Reply #37 on: March 07, 2017, 10:39:33 am »
Take care Fungus is watching.  ::)

No worries there, after getting a ban I'm full agreement with fact that his scope is best bang for the buck <1000$. Might even change to MrLamb soon ::)
Some have no idea of how cheap that design is and are too blind to see.
That is beside the point. For R&D work, an oscilloscope which is geared towards data/signal analysis is very useful, and it seems with Picoscope you get quite an extensive signal analysis package nowadays.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline capsicumTopic starter

  • Contributor
  • Posts: 34
  • Country: us
Re: Oscilliscope memory, type and why so small?
« Reply #38 on: March 07, 2017, 11:53:40 am »
 :scared: What is happening in my sweet sweet thread? (I only continue now for entertainment, I have sufficient answers.)

Anywho, back up a few and more or less on topic :horse: : The main reasons that seem to be given for tiny amounts of memory (under one Msample per channel) are that the manufacturers can't just hot-snot some PC DIMMs onto the main board to create an instant new product, and that any optimization or engineering beyond that is just really hard work, guys. :phew:  And a large data capture might need to be dumped to a desktop :o; next I'll be suggesting that photographers use Photoshop rather than relying solely on in-camera settings, and that YouTube uploads don't always need to be raw live streams.
Also, the ADC is a magical contraption, and thus its data stream can only be handled by special memory carved from unicorn teeth by virgins under the light of a November waxing moon. :-DD

So veering back off slightly: how many here have any idea how much data and time-critical processing go into rendering a 3D game with multiple light sources and transparent windows, in HD 24-bit (per pixel) color depth at better than a 60 Hz refresh rate? Hint: it isn't just displaying a simple bitmap; all the vectors need calculating, along with which items are in front or behind from the viewer's perspective, shadow direction and intensity...

So take a low-end scope with 4 channels at 5 GS/s: that needs a minimum uninterrupted bandwidth of 160 Gb/s, with no chance to pause or wait unless you add more buffers (FIFOs) to the system. That's already at the bleeding edge of off-the-shelf systems:
https://en.wikipedia.org/wiki/List_of_device_bit_rates#Dynamic_random-access_memory
A $250 PS4 (not even close to bleeding edge) has 176Gb/s bandwidth.

Scroll down that wiki page to the GPU RAM modules and you will see speeds of several terabits per second for 64 lanes [pins].
« Last Edit: March 07, 2017, 12:09:07 pm by capsicum »
 

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 17242
  • Country: 00
Re: Oscilliscope memory, type and why so small?
« Reply #39 on: March 07, 2017, 12:11:30 pm »
A $250 PS4 (not even close to bleeding edge) has 176Gb/s bandwidth.

Yes, but every single PS4 customer wants it and is willing to pay.

Not many oscilloscope owners do, so why should the manufacturers spend extra on something only a handful of customers would pay for?

Bottom line: The answer is "bean counting", not anything technical.
 

Offline mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 14020
  • Country: gb
    • Mike's Electric Stuff
Re: Oscilliscope memory, type and why so small?
« Reply #40 on: March 07, 2017, 01:29:13 pm »

A $250 PS4 (not even close to bleeding edge) has 176Gb/s bandwidth.

It's not all about bandwidth. That figure is probably peak burst, not sustained, and likely a very wide bus, which gets expensive to do in the sort of volumes test equipment is manufactured in.

YouTube channel: Taking weird stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28111
  • Country: nl
    • NCT Developments
Re: Oscilliscope memory, type and why so small?
« Reply #41 on: March 07, 2017, 04:58:38 pm »
In DDRx memory systems you can get very close to the maximum bandwidth because the chip consists of multiple memory banks. Each bank can be precharged, refreshed and read/written in sequential order. All in all DDRx memory is excellent for a DSO because you don't need random access.
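A toy calculation of that bank-interleaving point, with illustrative DDR3-style timings (not taken from any particular datasheet): a single bank leaves the bus mostly idle while the row recovers, but round-robin across eight banks keeps the bus busy.

```python
# Illustrative timings in nanoseconds; real values depend on the speed grade.
t_burst = 5.0     # bus time for one burst-of-8 from an open row (DDR3-1600)
t_rc = 48.75      # row cycle time: activate-to-activate on the SAME bank
banks = 8

single_bank_util = t_burst / t_rc                    # bus idle ~90% of the time
interleaved_util = min(1.0, banks * t_burst / t_rc)  # ~82% with 8 banks
print(f"single bank: {single_bank_util:.0%}, interleaved: {interleaved_util:.0%}")
```

In practice a scope streams long sequential runs from each open row, which amortises the activate overhead even further, so sustained sequential bandwidth gets closer still to the theoretical maximum.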
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 5016
  • Country: au
    • send complaints here
Re: Oscilliscope memory, type and why so small?
« Reply #42 on: March 07, 2017, 10:52:46 pm »
So veering back off slightly, how many here have any idea how much data and time critical processing go into rendering a 3D game with multiple light sources and transparent windows, in HD 24bit(per pixel) color depth with better than 60hz refresh rate? Hint it isn't just displaying a simple bitmap, all the vectors need calculating, which items are in front/back from the viewer's perspective, shadow direction and intensity...
A mass-volume market that has spent 20 years iterating on their designed-for-purpose ASICs: yes, it can compute a lot, but it can neither do the computations required for this application nor move the data in and out of the chip fast enough (you'd saturate the PCIe interface with just the ADC data).

So take a low-end scope with 4 channels at 5 GS/s: that needs a minimum uninterrupted bandwidth of 160 Gb/s, with no chance to pause or wait unless you add more buffers (FIFOs) to the system. That's already at the bleeding edge of off-the-shelf systems:
https://en.wikipedia.org/wiki/List_of_device_bit_rates#Dynamic_random-access_memory
A $250 PS4 (not even close to bleeding edge) has 176Gb/s bandwidth.

Scroll down that wiki page to the GPU RAM  modules and you will see speeds of several Terabits per second for 64 lanes [pins]
Again, a mass-market product where one of the obvious features for the consumer is more memory (better textures, more complex environments, etc.), and it still doesn't have enough bandwidth to cover these sorts of use cases. Perhaps that Hybrid Memory Cube stacked-RAM technology might be something that will appear in scopes in the future; do you have a price on those parts?

A $250 PS4 (not even close to bleeding edge) has 176Gb/s bandwidth.
It's not all about bandwidth. That figure is probably peak burst, not sustained, and likely a very wide bus, which gets expensive to do in the sort of volumes test equipment is manufactured in.
And you can't compare the averaged bandwidth over a long period of time unless there are also FIFOs to cover the latency and gaps in the transfer; the acquisition memory makes up just a small part of the oscilloscope, as has been discussed many times before.
 

Offline kcbrown

  • Frequent Contributor
  • **
  • Posts: 896
  • Country: us
Re: Oscilliscope memory, type and why so small?
« Reply #43 on: March 09, 2017, 02:54:54 am »
So veering back off slightly: how many here have any idea how much data and time-critical processing go into rendering a 3D game with multiple light sources and transparent windows, in HD 24-bit (per pixel) color depth at better than a 60 Hz refresh rate? Hint: it isn't just displaying a simple bitmap; all the vectors need calculating, along with which items are in front or behind from the viewer's perspective, shadow direction and intensity...

Yes.  And GPU vendors spend billions in order to design GPUs and card designs that are capable of all that.   Per year.  One example: https://ycharts.com/companies/NVDA/r_and_d_expense.   And trillions have gone into computing systems design for the purpose of optimizing computing hardware for the general computing use case.

The main problem is that the oscilloscope architecture has to be able to do continuous streaming of data to the RAM (or, at least, burst streaming with pauses for trigger re-arm) in such a way that it can't be interrupted by anything except a trigger re-arm sequence or the stop button.  You'd need to do reads during the periods when the RAM isn't being written to, and you'd have to be able to do enough of them, for long enough, that you could decimate the data fast enough for interactive display purposes.  The nature of the triggering system alone would probably demand a separate FIFO buffer large enough to accommodate whatever amount of data the triggering mechanism needs.  But since the scope exists to detect various conditions within a continuous signal, an engineering goal would be to minimize the trigger re-arm time, a goal that conflicts with the need to read from DRAM.

Even so, scope manufacturers are coming to terms with all this.  The Siglent SDS2000X series has 140M points of sample memory, as does the Rigol DS4000 series.

The point is that because the oscilloscope's data access patterns are so different from that required by most computing applications, including 3D graphics, the end result is that either the memory technology used has to be at least an order of magnitude faster than the design spec requires in order to accommodate an unoptimized access pattern quickly enough, or the scope manufacturers would have to spend a rough equivalent (within an order of magnitude, I'd think) of what GPU manufacturers spend in order to create designs that are sufficiently optimized to work with DDR3 memory and achieve the target acquisition and processing speeds.  And as everyday signals get higher in frequency, the demand for ever faster acquisition remains in place.  As such, the speed versus memory size tradeoff isn't going away anytime soon IMO.


Fortunately, all of this improves over time as the base technology improves, so while scope memory may remain a major selling point for the moment, I wouldn't count on that being the case for more than a few years (maybe two more generations) unless new and novel ways are devised to use that memory to good effect (a distinct possibility).
« Last Edit: March 09, 2017, 02:58:21 am by kcbrown »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28111
  • Country: nl
    • NCT Developments
Re: Oscilliscope memory, type and why so small?
« Reply #44 on: March 09, 2017, 03:10:19 am »
@kcbrown: you and some others are massively overestimating the problems. Streaming data from an ADC into memory is peanuts. Doing this while displaying many waveforms/s is also peanuts (hint: many if not all scopes use double buffering). I create very similar data acquisition systems for one of my customers.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline kcbrown

  • Frequent Contributor
  • **
  • Posts: 896
  • Country: us
Re: Oscilliscope memory, type and why so small?
« Reply #45 on: March 09, 2017, 03:29:39 am »
@kcbrown: you and some others are massively overestimating the problems. Streaming data from an ADC into memory is peanuts. Doing this while displaying many waveforms/s is also peanuts (hint: many if not all scopes use double buffering). I create very similar data acquisition systems for one of my customers.

Hmm...double buffering takes care of a lot of problems, I have to admit.  But the double buffering has to exist at the bus level, because you're fighting bus contention.   As long as that's the case, it should be fine.  Even segmented memory capture can bounce between the two banks to avoid contention. 

And I suppose as long as your trigger conditions can be limited to a fairly short amount of memory (enough that you can buffer it separately), you won't have trouble there, either. 

If this problem is as easy to deal with as you say, then why haven't manufacturers stepped up to the plate with massive capture memories based on DDR3 architectures?   The design only has to be done once, right?


Manufacturers are already using double buffering.   So what's the holdup for them?

I've learned that if a solution seems blindingly obvious to you (as this seems to be), and people aren't using it, it's due to one of three things:
  • You're a genius, and things that are simple and obvious to you just aren't to anyone else (this explanation is the least likely)
  • Those who are capable of implementing the solution are so stuck in their ways that they don't want to bother (this seems unlikely also, but possible)
  • There's a really good and non-obvious reason that the obvious solution is not being used.




« Last Edit: March 09, 2017, 03:41:05 am by kcbrown »
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 5016
  • Country: au
    • send complaints here
Re: Oscilliscope memory, type and why so small?
« Reply #46 on: March 09, 2017, 08:31:40 am »
@kcbrown: you and some others are massively overestimating the problems. Streaming data from an ADC into memory is peanuts. Doing this while displaying many waveforms/s is also peanuts (hint: many if not all scopes use double buffering). I create very similar data acquisition systems for one of my customers.

Hmm...double buffering takes care of a lot of problems, I have to admit.  But the double buffering has to exist at the bus level, because you're fighting bus contention.   As long as that's the case, it should be fine.  Even segmented memory capture can bounce between the two banks to avoid contention. 

And I suppose as long as your trigger conditions can be limited to a fairly short amount of memory (enough that you can buffer it separately), you won't have trouble there, either. 

If this problem is as easy to deal with as you say, then why haven't manufacturers stepped up to the plate with massive capture memories based on DDR3 architectures?   The design only has to be done once, right?


Manufacturers are already using double buffering.   So what's the holdup for them?

I've learned that if a solution seems blindingly obvious to you (as this seems to be), and people aren't using it, it's due to one of three things:
  • You're a genius, and things that are simple and obvious to you just aren't to anyone else (this explanation is the least likely)
  • Those who are capable of implementing the solution are so stuck in their ways that they don't want to bother (this seems unlikely also, but possible)
  • There's a really good and non-obvious reason that the obvious solution is not being used.
There is a group of people on here who say it's easy to do, but when pushed they distract you with endless other points:
https://www.eevblog.com/forum/testgear/100-000-waveformssec-versus-a-screen-refresh-rate-of-100-hz/?all
I'll agree with you that it's not trivial, simple, or obvious. If there were a cheap way to do it, you could make very good money selling off the technology.

To get sustained gigapoints/second throughput to the screen (over multiple channels) requires hardware dedicated to doing this, by way of an ASIC or large FPGA. This can be separate from the acquisition memory, or share it with double buffering. Streaming the 100+ Gb/s to RAM is relatively simple and cheap compared to creating the 2D histogram at high throughput. As mentioned earlier in the thread, there are scopes with deep memory and scopes with high throughput; there aren't scopes that do both simultaneously.
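For reference, the 2D histogram in question is just a hit-count grid the size of the display; a (very slow) Python sketch of the data structure, with a tiny toy grid:

```python
COLS, ROWS = 8, 4    # a real scope grid is roughly the display resolution

def histogram_2d(samples, lo, hi):
    """Count how many samples land in each (time-column, voltage-row) cell."""
    grid = [[0] * COLS for _ in range(ROWS)]
    for x, s in enumerate(samples):
        col = x * COLS // len(samples)
        row = min(ROWS - 1, int((s - lo) / (hi - lo) * ROWS))
        grid[row][col] += 1
    return grid

samples = [i / 32 for i in range(32)]    # one ramp across the "screen"
grid = histogram_2d(samples, 0.0, 1.0)
print(sum(map(sum, grid)))               # every sample lands in exactly one cell
```

The hard part is performing that increment for every one of the 100+ Gb/s of incoming samples, with decay for intensity grading, which is why it ends up in an ASIC or large FPGA rather than a CPU.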
 

Offline kcbrown

  • Frequent Contributor
  • **
  • Posts: 896
  • Country: us
Re: Oscilliscope memory, type and why so small?
« Reply #47 on: March 09, 2017, 08:52:52 am »
I'll agree with you that it's not trivial, simple, or obvious. If there were a cheap way to do it, you could make very good money selling off the technology.

Well, what is not obvious is why the "obvious" solution doesn't work.

We have three requirements:

  • Fast triggering
  • Fast continuous writes of sampled data
  • Sufficiently fast reads of sampled data to allow for decimation and other processing for display purposes.

The first is taken care of by an extremely fast FIFO.  The second and third are taken care of with double buffering, with writes to memory being accomplished through the FIFO.

So what's missing here?  What makes it impossible to achieve all three of those goals with DDR3 memory as the primary capture buffer?  Nothing says that DDR3 needs to be used for the high-speed FIFO or for the decimation target memory.
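A minimal single-threaded sketch of the architecture described above (a fast FIFO in front, two capture banks behind it); all sizes here are made-up toy numbers:

```python
from collections import deque

FIFO_DEPTH = 16     # small, fast on-chip FIFO (toy size)
BANK_SIZE = 64      # capture-bank depth in samples (toy size)

fifo = deque(maxlen=FIFO_DEPTH)
banks = [[], []]
write_bank = 0      # bank the acquisition side is currently filling
display_updates = 0

for t in range(4 * BANK_SIZE):                # simulate some ADC clock ticks
    fifo.append(t & 0xFF)                     # ADC pushes one 8-bit sample
    banks[write_bank].append(fifo.popleft())  # drain FIFO into capture RAM
    if len(banks[write_bank]) == BANK_SIZE:
        write_bank ^= 1                       # swap: display side owns the full bank
        banks[write_bank] = []                # overwrite the stale bank
        display_updates += 1                  # stand-in for decimate-and-draw

print(display_updates)                        # one display update per filled bank
```

The display path only ever touches the bank that is not being written, so reads never contend with the acquisition stream, which is the whole point of the double buffer.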


Quote
To get sustained gigapoints/second throughput to the screen (over multiple channels) requires hardware dedicated to doing this by way of an ASIC or large FPGA.

But that's just it: you don't have to get all the points to the screen.  You have to get a representation of them to the screen.  And you only have to do so at a rate of something like 30 times per second.  You can stream the data for this from primary RAM in the buffer that isn't being written to (double buffering makes this possible).  You'll always be one buffer's worth of time behind in terms of what you show on the screen versus what is being captured, but so what?  Once someone hits the "stop" button or the trigger conditions cause the scope to stop capture, that's the point at which the screen can "catch up".

The reads from RAM obviously have to be at least as fast as the writes to it in order for that to work, of course.  But the limitation there is very likely to be in the processor, not the RAM, since the processor has to perform multiple operations per sample in order to do a proper histogram (especially one with decay, as is needed for intensity-graded display).
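The "representation of them" step is essentially peak-detect decimation; a slow Python sketch just to show the idea (a real scope does this in the FPGA/ASIC):

```python
# Reduce a deep capture to one (min, max) pair per screen column, so even a
# single-sample glitch stays visible in the zoomed-out overview.
def decimate_minmax(samples, columns):
    n = len(samples)
    out = []
    for c in range(columns):
        chunk = samples[c * n // columns:(c + 1) * n // columns]
        out.append((min(chunk), max(chunk)))
    return out

capture = [0] * 1_000_000          # a 1 Mpts capture...
capture[123_456] = 100             # ...containing one single-sample glitch
columns = decimate_minmax(capture, 800)
print(len(columns), max(hi for _lo, hi in columns))
```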


Quote
This can be separate from the acquisition memory, or share it with double buffering. Streaming the 100+ Gb/s to RAM is relatively simple and cheap compared to creating the 2D histogram at high throughput.

Sure, but the histogram is a decimation of the data in main memory, something that doesn't require a target memory region that's nearly as large as the capture memory.  Indeed, it really only needs to be sized proportionally to the number of pixels on the screen.   The use of DDR3 memory for the main capture buffer has nothing to do with this.


Quote
As mentioned earlier in the thread, there are scopes with deep memory and scopes with high throughput; there aren't scopes that do both simultaneously.

Right.  But why?  What is it about the basic architecture of the above that is somehow inadequate to the task?  I'm not arguing that the architecture above will work.  I'm trying to figure out why it won't, because I presume the third option I mentioned previously is the one that's in play here.


I'm able to get 8 GB/s write throughput using a simple C program on my computer.  It's nothing fancy.  That throughput is achieved using a mid-2011 Mac Mini with a 2.4GHz Core i7.  For 1 GS/s, the memory architecture in my computer, which isn't optimized for sequential writes, has 8 times the needed bandwidth, with technology that's over 5 years old.  My newest desktop gets nearly twice that in a virtual machine, and isn't even configured with multiple banks of memory.

I definitely agree that the highest-end scopes doing 40 GS/s (like the Keysight 90000 series) will almost certainly require something faster than DDR3, unless the memory is organized in parallel access banks or something equivalent.  That kind of throughput does require a super-fast special-purpose ASIC.  But we're talking about lower-end scopes with sample rates of something like 2 GS/s, are we not?
« Last Edit: March 09, 2017, 09:19:55 am by kcbrown »
 

Offline MrW0lf

  • Frequent Contributor
  • **
  • Posts: 922
  • Country: ee
    • lab!fyi
Re: Oscilliscope memory, type and why so small?
« Reply #48 on: March 09, 2017, 09:15:58 am »
But that's just it: you don't have to get all the points to the screen.

As soon as you switch on any meaningful signal analysis, this whole "points on the screen" concept doesn't cut it. A DSO is not a CRO with a colorful TFT display, but a much more powerful tool, and not only for simple glitch hunting. If you apply statistical math to the raw, non-decimated sample points, then performance levels can be achieved that aren't obvious from the single-shot spec.

 

Online Someone

  • Super Contributor
  • ***
  • Posts: 5016
  • Country: au
    • send complaints here
Re: Oscilliscope memory, type and why so small?
« Reply #49 on: March 09, 2017, 10:15:20 am »
To get sustained gigapoints/second throughput to the screen (over multiple channels) requires hardware dedicated to doing this by way of an ASIC or large FPGA.
But that's just it: you don't have to get all the points to the screen.  You have to get a representation of them to the screen.  And you only have to do so at a rate of something like 30 times per second.  You can stream the data for this from primary RAM in the buffer that isn't being written to (double buffering makes this possible).  You'll always be one buffer's worth of time behind in terms of what you show on the screen versus what is being captured, but so what?  Once someone hits the "stop" button or the trigger conditions cause the scope to stop capture, that's the point at which the screen can "catch up".

The reads from RAM obviously have to be at least as fast as the writes to it for that to work.  But the limitation there is very likely to be in the processor, not the RAM, since the processor has to perform multiple operations per sample in order to build a proper histogram (especially one with decay, as is needed for an intensity-graded display).
You do have to get all the points from the acquisition to the screen, otherwise you won't see the interesting characteristics. There are two ways to do it:
Double buffer the acquisition and let a slow process (such as a CPU) render the display; while waiting for that render you can't see any of the samples coming in.
or
Have a realtime hardware system render the view: no (or far fewer) missed samples, but limited in memory size.
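The first option boils down to a swap like this (an illustrative sketch, not any scope's firmware; names and sizes are made up):

```c
#include <stdint.h>
#include <stddef.h>

#define BUF_LEN (1u << 20)          /* 1 Mpt per buffer, arbitrary */

static uint8_t bufs[2][BUF_LEN];
static int write_idx = 0;           /* buffer the ADC is currently filling */

/* Called when the acquisition buffer fills. If the renderer is still
 * busy with the other buffer, this acquisition is dropped: that is the
 * blind time. Returns the buffer to render, or NULL if the frame had
 * to be discarded. */
const uint8_t *swap_buffers(int renderer_busy)
{
    if (renderer_busy)
        return NULL;                /* samples lost while the CPU catches up */
    const uint8_t *done = bufs[write_idx];
    write_idx ^= 1;                 /* ADC now fills the other buffer */
    return done;
}
```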

Even just looking at a long capture (Mpts and up), you need to see the details so you know where to zoom in for further study (or have some intelligent tool search for you), so approximations such as min/max envelopes aren't always enough.

This can be separate from the acquisition memory, or share it with double buffering. Streaming the 100+ Gb/s to RAM is relatively simple and cheap compared to creating the 2D histogram at high throughput.
Sure, but the histogram is a decimation of the data in main memory, something that doesn't require a target memory region that's nearly as large as the capture memory.  Indeed, it really only needs to be sized proportionally to the number of pixels on the screen.   The use of DDR3 memory for the main capture buffer has nothing to do with this.

I definitely agree that the highest-end scopes that are doing 40GS/s (like the Keysight 90000 series) will almost certainly require something faster than DDR3, unless the memory is organized in parallel access banks or something equivalent.  That kind of throughput does require a super-fast special-purpose ASIC.   But we're talking about lower end scopes that do sample rates of something like 2GS/s, are we not?
The 9000/90000 series have quite slow update rates and large blind time: they just pile the acquisition up into a big buffer and then slowly read it out for processing and display. They don't have the realtime hardware, but, as you say, a very wide memory bus to queue the acquisition in. On the issue of cost, notice how they ship with all the memory installed and you unlock the deeper memory with a software license ;)
 

