Author Topic: Oscilliscope memory, type and why so small?  (Read 32031 times)


Offline Someone

Re: Oscilliscope memory, type and why so small?
« Reply #100 on: March 12, 2017, 10:15:45 pm »
Quote
Using DDR memory with an FPGA is easy nowadays. Just drop a piece of pre-cooked IP into your design and you're done. Xilinx allows you to have multiple narrow memory interfaces to the fabric (basically creating a multi-port memory) or one really wide one. In the Xilinx Zynq the memory is shared between the processor and the FPGA fabric through multiple memory interfaces.
And then you explicitly have the FPGA logic to push all the high-throughput computation onto. It's a nice part for that application, but the memory bus on the Zynq parts is probably a little narrow; the hard interface for the processor RAM is only a 32-bit DDR bus at 400-500MHz:
http://blog.elphel.com/2014/06/ddr3-memory-interface-on-xilinx-zynq-soc-free-software-compatible/

So you're back to needing a special wide memory interface for acquisition, with the associated costs, and with limited bandwidth to the CPU.
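For a rough sense of scale, here is that hard controller's theoretical peak (a sketch; only the 32-bit width and the 400-500MHz clock range come from the post above, the rest is plain arithmetic):

Code:
# Peak bandwidth of the Zynq-7000 hard DDR controller, using the
# figures quoted above: a 32-bit bus with a 400-533 MHz DDR clock.
BUS_BITS = 32
for ddr_clock_mhz in (400, 533):
    transfers_per_s = ddr_clock_mhz * 1e6 * 2        # DDR: two transfers per clock
    peak_gb_s = transfers_per_s * BUS_BITS / 8 / 1e9
    print(f"{ddr_clock_mhz} MHz -> {peak_gb_s:.1f} GB/s peak")
# ~3.2-4.3 GB/s peak, before refresh and bus-turnaround overhead.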
 

Offline nctnico

Re: Oscilliscope memory, type and why so small?
« Reply #101 on: March 12, 2017, 11:13:48 pm »
Maybe you should stick to Xilinx's datasheets/documentation, because they tell an entirely different story.
 

Offline Someone

Re: Oscilliscope memory, type and why so small?
« Reply #102 on: March 12, 2017, 11:40:55 pm »
Quote
Maybe you should stick to Xilinx's datasheets/documentation, because they tell an entirely different story.
Or perhaps you don't actually know what you're talking about? The Zynq UltraScale parts have 64-bit hard memory controllers, but look at the pricing on those some time compared to the 7000-series Zynq parts (which only have a 32-bit hard memory controller). I use these devices; I know them and their capabilities well. It's possible to build wide, high-bandwidth memory controllers in the logic resources and link them back to the CPUs, but it gets expensive very quickly to have enough suitable pins on the device, and there are still bottlenecks getting data into the CPU.
 

Offline nctnico

Re: Oscilliscope memory, type and why so small?
« Reply #103 on: March 13, 2017, 12:08:58 am »
Then link to the documentation and not some random website. The documentation says the memory interface can handle 32 bits at 1800Mb/s per pin, which makes 7.2GB/s.
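That 7.2GB/s figure is indeed the theoretical peak (a quick check of the arithmetic, assuming the quoted per-pin rate):

Code:
# 32 data pins, each at 1800 Mb/s (the rate quoted above).
pins, rate_mb_s = 32, 1800
peak_gb_s = pins * rate_mb_s * 1e6 / 8 / 1e9
print(peak_gb_s)  # 7.2 -- GB/s of theoretical peak, before any overhead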
 

Offline Someone

Re: Oscilliscope memory, type and why so small?
« Reply #104 on: March 13, 2017, 12:15:49 am »
Quote
Then link to the documentation and not some random website. The documentation says the memory interface can handle 32 bits at 1800Mb/s per pin, which makes 7.2GB/s.
And? In the context of this discussion it's not even close to the rates needed to sustain even a 4-channel 1GS/s scope continuously. You can chip away with all your little "clever" remarks and miss the big picture...
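A rough sketch of the required rates (assuming 8-bit samples, and that sustained operation means writing every sample while also reading it back out for processing):

Code:
# Four channels at 1 GS/s with 8-bit samples.
channels, rate_gsps, bytes_per_sample = 4, 1.0, 1
write_gb_s = channels * rate_gsps * bytes_per_sample   # 4 GB/s coming in
total_gb_s = 2 * write_gb_s                            # plus reading it back out
print(write_gb_s, total_gb_s)   # 4.0 and 8.0 GB/s
# Against a 7.2 GB/s theoretical peak, sustained write + read does not fit,
# even before refresh and row-turnaround losses.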
« Last Edit: March 13, 2017, 12:18:08 am by Someone »
 

Offline nctnico

Re: Oscilliscope memory, type and why so small?
« Reply #105 on: March 13, 2017, 12:34:29 am »
Well... connect extra memory or put 2 Zynqs on the board, but 4x1GB/s is still less than 7.2GB/s, so I guess you meant a higher sample rate. Having more Zynqs doubles the processing speed as well. In the end you just need to transfer the data from one to the other to draw a relatively small image on a screen.
« Last Edit: March 13, 2017, 12:36:37 am by nctnico »
 

Offline kcbrown

Re: Oscilliscope memory, type and why so small?
« Reply #106 on: March 13, 2017, 04:01:48 am »
Quote
I was presuming that the acquisition FPGA could write directly to the acquisition DRAM, and the acquisition memory could be set up in a double-buffered ("banked"?) configuration so that the FPGA's writes wouldn't collide with reads performed by the downstream processing pipeline.  Which is to say, the FPGA's writes would go through a switch which would send the data to one DRAM bank or the other, depending on which one was in play at the time.

Quote
That is really difficult to do with modern high performance DRAM short of building your own memory controller with two interfaces.  Older DSOs did work this way by sharing the memory bus.

A dual-port memory controller is pretty much what I had in mind.  I would be surprised if such controllers don't already exist, but only mildly so.  If they do exist, I imagine they're rather expensive.

I'd been struggling to remember the name of the interleaving mechanism that has been used in PCs to improve memory throughput, but recently found it again: dual-channel architecture.  But what we're talking about here isn't quite the same thing.
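For reference, the double-buffered ("banked") configuration described above is essentially a ping-pong buffer. A minimal sketch of the idea (hypothetical Python, purely illustrative, not any scope's actual design):

Code:
# Ping-pong ("banked") acquisition buffer: the writer fills one bank
# while the reader drains the other; roles swap after each record.
banks = [bytearray(4096), bytearray(4096)]   # stand-ins for two DRAM banks
active = 0                                   # bank currently being written

def acquire_record(samples: bytes) -> None:
    """FPGA side: write a complete record into the active bank."""
    banks[active][:len(samples)] = samples

def swap_and_read() -> bytearray:
    """Processing side: swap banks and return the one just filled."""
    global active
    active ^= 1                  # writer moves to the other bank
    return banks[active ^ 1]     # reader gets the bank that was just written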


Quote
The Zynq solves a lot of problems, but can its serial interfaces connect to ADCs and operate continuously?  These things cost as much as an entire Rigol DSO.  A dedicated FPGA for each channel would be cheaper.

I was thinking of the Zynq 7000 series that's used in the entry-level Instek DSOs.  Those clearly can't be that expensive unless Instek is losing a bunch of money on them.   I don't know anything about the Zynq, however, so have no idea what its limitations are.  It's obviously good for something in the DSO world, since Instek appears to be making good use of them.


Quote
It did lead me to wonder how many designers of DDR2/DDR3 memory controllers have gone insane.

 :-DD

I imagine balancing the refresh requirements of DRAM with the need for minimized latency and maximized throughput would drive nearly all of them insane!


Quote
In the recent past I looked at a modern SRAM based design but the parallelism of DDR2/DDR3 has a lot of advantages even though they are complicated.  Just using the FPGA or ASIC's internal memory has a lot going for it.

It certainly seems like it should simplify things quite a lot.  At a minimum, you don't have to build your own memory controller!

As has been mentioned, there are apparently DRAM controller designs that are ready to go for FPGA use, but just because they exist doesn't mean they're terribly good.  Indeed, it seems to me that the nature of DRAM is such that you essentially have to design the memory controller around your expected access patterns in order to maximize whatever performance metric you're after (latency, throughput, etc.).  A DSO's typical access patterns are probably rather different from the usual ones, though their largely sequential nature might make such a controller easier to design than the general case, and existing designs may already handle it quite well.

 

Offline kcbrown

Re: Oscilliscope memory, type and why so small?
« Reply #107 on: March 13, 2017, 04:53:49 am »
Does anyone here have an Intel-based SBC like the UP board or something?  It would be interesting to see what results my test program generates on such a system.  The CPUs on those are more oriented towards embedded use, but probably have the same instructions that seem to make the test program very fast on PC hardware.
 

Offline MrW0lf

Re: Oscilliscope memory, type and why so small?
« Reply #108 on: March 13, 2017, 09:09:16 am »
All this talk is very interesting, but it's hard to see the point. Clearly there are two completely different approaches. One is the glitch hunter's approach, which is fine with tiny memory but needs a very high wfm rate. The glitch hunter will say large memory is s*it and trouble, because it makes a high wfm rate hard to achieve.
The other approach (very large memory) is geared more towards R&D in a controlled environment. The wfm rate is poor, but you have NO blind time if the whole thing fits into the record:
For example, a 4MB very-fast-wfm scope running at 5GSa/s will still break a ~0.4s record into 500 chunks, while a 2GB 5GSa/s scope will just soak it all in without any blind time at all. It would look good in marketing: "infinitely better".
Now let's say you're a scientist: would you prefer to reconstruct (with terminal loss) from 500 chunks, or just take the 2GB straight into your MATLAB or whatever? :-DD

Edit: It just occurred to me that I might have missed a little detail: the 4MB fast-wfm scope cannot offload 0.4s of even chopped data in principle, because there is nowhere to store it at that rate... :-//
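The arithmetic behind that example (a sketch; 8-bit samples assumed):

Code:
# 0.4 s at 5 GSa/s with 8-bit samples: what does a gapless record take?
duration_s, rate_sa_s = 0.4, 5e9
record_bytes = duration_s * rate_sa_s      # 2e9 -> 2 GB for the whole event
chunks_in_4mb = record_bytes / 4e6         # 500 separate 4 MB captures
span_per_chunk_ms = 4e6 / rate_sa_s * 1e3  # each 4 MB chunk covers only 0.8 ms
print(record_bytes / 1e9, chunks_in_4mb, span_per_chunk_ms)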

« Last Edit: March 13, 2017, 10:29:38 am by MrW0lf »
 

Offline kcbrown

Re: Oscilliscope memory, type and why so small?
« Reply #109 on: March 13, 2017, 01:55:33 pm »
Quote
All this talk is very interesting, but it's hard to see the point. Clearly there are two completely different approaches. One is the glitch hunter's approach, which is fine with tiny memory but needs a very high wfm rate. The glitch hunter will say large memory is s*it and trouble, because it makes a high wfm rate hard to achieve.

I fail to see why you can't have both.   More precisely, I fail to see how a well-designed large memory system will necessarily result in a lower waveform rate.

Suppose you have a capture system which captures all of the samples continuously.  Suppose too that you have a triggering system that does nothing but record the memory addresses of every triggering event.

If you're glitch hunting, then you want to maximize the waveform update rate.  Assuming a triggering subsystem that can keep up with the sampling rate, the waveform update rate is going to be determined by how quickly you can scan a display window's worth of data and show it on the screen.  But that data is anchored at the trigger point, so you have to have trigger data anyway.  Your processing rate is what it is, and it may be slow enough that it has to skip triggering events that are too closely spaced in time.  But how can the waveform update rate possibly depend on the size of the entire capture when the displayed time range shows only a subset of it?  More to the point, if you have a system with a small amount of capture memory, then you're forced to sample at a slower rate in order to capture a longer period of time, but that's no different from subsampling a larger buffer from a faster capture.  And that's true even of the triggering mechanism, if need be.  Sure, a triggering mechanism that subsamples the memory wouldn't see all of the possible triggering events, but that is no different from reducing the sampling rate to make the capture fit into a smaller amount of memory.
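A minimal sketch of the arrangement proposed above (hypothetical code, purely illustrative): continuous capture into one big ring buffer, a trigger unit that records only event addresses, and a display path that pulls one window's worth of data around a trigger:

Code:
from collections import deque

RING_SIZE = 1 << 20                  # one big circular capture buffer
ring = bytearray(RING_SIZE)
trigger_addrs = deque(maxlen=4096)   # addresses of recent trigger events only

def ingest(sample: int, addr: int, threshold: int = 200) -> None:
    """Acquisition path: store every sample; log only trigger addresses."""
    ring[addr % RING_SIZE] = sample
    if sample >= threshold:          # stand-in for a real trigger condition
        trigger_addrs.append(addr % RING_SIZE)

def window_at_latest_trigger(width: int = 512) -> bytearray:
    """Display path: one screen's worth of data around the newest trigger."""
    if not trigger_addrs:
        return bytearray()
    t = trigger_addrs[-1]
    return bytearray(ring[(t + i) % RING_SIZE]
                     for i in range(-width // 2, width // 2))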


Sure, it's desirable to process all of the data in the capture buffer (or the subset that represents the time window covered by the display), but how is it an advantage to reduce the amount of memory available to the capture system?  I see no upside to that save for cost savings.  But the advantages of a larger capture memory are undeniable.

« Last Edit: March 13, 2017, 01:58:37 pm by kcbrown »
 

Offline David Hess

Re: Oscilliscope memory, type and why so small?
« Reply #110 on: March 13, 2017, 05:09:47 pm »
Quote
All this talk is very interesting, but it's hard to see the point. Clearly there are two completely different approaches. One is the glitch hunter's approach, which is fine with tiny memory but needs a very high wfm rate. The glitch hunter will say large memory is s*it and trouble, because it makes a high wfm rate hard to achieve.

I fail to see why you can't have both.   More precisely, I fail to see how a well-designed large memory system will necessarily result in a lower waveform rate.

Suppose you have a capture system which captures all of the samples continuously.  Suppose too that you have a triggering system that does nothing but record the memory addresses of every triggering event.

If you're glitch hunting, then you want to maximize the waveform update rate.  Assuming a triggering subsystem that can keep up with the sampling rate, the waveform update rate is going to be determined by how quickly you can scan a display window's worth of data and show it on the screen.  But that data is anchored at the trigger point, so you have to have trigger data anyway.  Your processing rate is what it is, and it may be slow enough that it has to skip triggering events that are too closely spaced in time.  But how can the waveform update rate possibly depend on the size of the entire capture when the displayed time range shows only a subset of it?  More to the point, if you have a system with a small amount of capture memory, then you're forced to sample at a slower rate in order to capture a longer period of time, but that's no different from subsampling a larger buffer from a faster capture.  And that's true even of the triggering mechanism, if need be.  Sure, a triggering mechanism that subsamples the memory wouldn't see all of the possible triggering events, but that is no different from reducing the sampling rate to make the capture fit into a smaller amount of memory.

I like Knuth's recommendation about looking at problems from the perspective of data organization.

Assume you are going to produce a histogram from the long record and display it, and let's say the display is 1024 x 1024.  (1) So every histogram is 1Mword x depth.  Even at a small depth of 8 bits, you have just blown out the L1 CPU data cache by 8 or 16 times, which is going to have a major impact on performance.  You have probably also blown out the L2 cache.  Since the processing is amenable to parallelization, multiple cores can be used, which gives us more L1 cache, but 8 or 16 of them?  And that is with an inadequate depth.  How about 32 or 64 cores? (2)  FPGAs or dedicated ASICs suddenly seem a lot more reasonable.
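The cache arithmetic behind that claim (a sketch; the 32-64KB L1 figure is a typical size, not from the post):

Code:
# A 1024 x 1024 display histogram at various bin depths vs. typical caches.
bins = 1024 * 1024
for depth_bytes in (1, 2, 4):                  # 8-, 16-, 32-bit bins
    size_mb = bins * depth_bytes / (1024 * 1024)
    print(f"{8 * depth_bytes:2d}-bit bins: {size_mb:.0f} MB histogram")
# Even 8-bit bins need 1 MB -- 16-32x a typical 32-64 KB L1 data cache,
# so random-access increments mostly miss L1 and likely spill L2 as well.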

Maybe a GPU can handle this but someone else will have to answer because I am not familiar enough with GPU memory organization.

Quote
Sure, it's desirable to process all of the data in the capture buffer (or the subset that represents the time window covered by the display), but how is it an advantage to reduce the amount of memory available to the capture system?  I see no upside to that save for cost savings.  But the advantages of a larger capture memory are undeniable.

Last time we discussed this, I came to the conclusion of "why not both?", which is sort of the result the Rigol DS1000Z series produces, where most of what you see, and what measurements are made on, is a short record just long enough for the display. (3)

Produce the histograms in real time but also capture the raw record; the bandwidth required for the histograms is so high that you might as well, assuming you have memory available to hold it.  Actually, produce multiple histograms, with low-depth ones operating at the maximum sample rate and deeper histograms operating at fractions of the sample rate, because otherwise an ASIC or a really big (expensive) FPGA is required to get enough memory bandwidth.

(1) With apologies to Jim Williams, I'm sorry, but the lowest resolution DSO in my house is 1024 x 1024.

(2) I am sure someone is doing this somewhere between high end DSOs and specialized digitizer territory.

(3) It has to be this way or peak detection would not work.  Hmm, does it work?  Has anybody actually tested it on Rigol's UltraVision DSOs?  I remember testing this on the Tektronix 2230, which may have been the first DSO with this feature, and finding that it did indeed work, which was the surprising result; the 2230 is a little weird, and Tektronix was operating the vertical channel switch in a way that was not intended.
 

Offline MrW0lf

Re: Oscilliscope memory, type and why so small?
« Reply #111 on: March 13, 2017, 09:09:53 pm »
Quote
Last time we discussed this, I came to the conclusion of "why not both?", which is sort of the result the Rigol DS1000Z series produces, where most of what you see, and what measurements are made on, is a short record just long enough for the display.

The Rigol approach indeed makes sense with very limited hardware, but they messed up one key point: you may decimate data, but never destroy it! They probably did this to make use of some limited-bit integer arithmetic, which is of course fast, but it effectively nulls out the accuracy gain you could get from stats collection. Stats on the Rigol effectively do not work because of this. Initially I was hoping to do the same trickery I do with my new scope... when needing extra accuracy (with repeating signals), just crank the stats up to 1000x and enjoy extreme timing accuracy even with low sample sets (max 16kpts with ETS).
Rigol obviously got ideas from Keysight's MegaZoom... What I still don't get about MZ is that even Keysight says it's only "screen pixels", yet you get substantially more accuracy than with Rigol. The *SOX3000T seems to deliver about 20kpts-like accuracy. Maybe MegaZoom only decimates rather than destroys, and "statistical gain" kicks in? 20kpts-like seems about right for statistical gain on screen pixels.
I'm not sure there is yet enough market pressure for a cheap super-scope having both 1M+ wfm/s and gigabytes of memory, despite it being totally possible technically (you need hardware parallelism to cover the non-trivial use cases, just like with multi-core CPUs). But that's only good, because with a single perfect scope there would be no excuse to acquire a wide variety of gear... which would be very hard to accept for people suffering from various gear-acquisition-related illnesses :P
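If the "statistical gain" above is plain averaging, uncorrelated noise should shrink as 1/sqrt(N) over N accumulated acquisitions. A small simulation sketch under that assumption (hypothetical code):

Code:
import random

def std_of_mean(n_acqs: int, trials: int = 2000) -> float:
    """Spread of one display point after averaging n_acqs noisy acquisitions."""
    means = [sum(random.gauss(0, 1) for _ in range(n_acqs)) / n_acqs
             for _ in range(trials)]
    mu = sum(means) / trials
    return (sum((m - mu) ** 2 for m in means) / trials) ** 0.5

for n in (1, 16, 1000):
    print(n, round(std_of_mean(n), 3))   # falls roughly as 1/sqrt(n)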

Edit: It seems the new R&S is trying to find some middle ground, with 10...20MB of main memory and 160MB of segmented. I cannot find trigger rearm data; wfm/s is 50,000 (maybe worse when offloading to segmented memory?). There has to be some (cost/performance) reason to differentiate the two memories.
« Last Edit: March 13, 2017, 10:54:14 pm by MrW0lf »
 

Offline NorthGuy

Re: Oscilliscope memory, type and why so small?
« Reply #112 on: March 14, 2017, 12:01:46 am »
Quote
Then link to the documentation and not some random website. The documentation says the memory interface can handle 32 bits at 1800Mb/s per pin, which makes 7.2GB/s.

The standard DDR3 controllers are not optimized for scope construction. If you aim at standard 64-bit DIMM modules, that doubles the bus width, and if you consider common 1333Mb/s modules, the theoretical bandwidth is 85Gb/s, which is plenty for a scope, and you can use almost 100% of it. You probably cannot get to 1333Mb/s with an entry-level (under $50) FPGA, but you can get close. 4 channels at 1GS/s is only 32Gb/s, so you might be able to use it for simultaneous acquisition and read (which requires 64Gb/s), but half-duplex use (acquisition, then read) should be really easy. DIMMs go up to 16GB, which is quite a lot for a cheap setup. You only need to apply some effort and design a custom, intelligent DDR3 controller - nothing impossible. I'm contemplating building it myself, but I don't have much time.
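Checking those numbers (a sketch; 8-bit samples assumed, as elsewhere in the thread):

Code:
# 64-bit DIMM at 1333 Mb/s per pin vs. a 4 x 1 GS/s, 8-bit acquisition stream.
dimm_gbit_s = 64 * 1333e6 / 1e9       # ~85.3 Gb/s theoretical peak
acq_gbit_s = 4 * 1e9 * 8 / 1e9        # 32 Gb/s just to write the samples
print(dimm_gbit_s, acq_gbit_s, 2 * acq_gbit_s)
# 85.3 vs 32 (acquire only) or 64 (acquire + simultaneous read-out):
# full duplex is tight but conceivable; half duplex has lots of headroom.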
 

Offline David Hess

Re: Oscilliscope memory, type and why so small?
« Reply #113 on: March 14, 2017, 12:49:45 am »
Quote
Last time we discussed this, I came to the conclusion of "why not both?", which is sort of the result the Rigol DS1000Z series produces, where most of what you see, and what measurements are made on, is a short record just long enough for the display.

Quote
The Rigol approach indeed makes sense with very limited hardware, but they messed up one key point: you may decimate data, but never destroy it! They probably did this to make use of some limited-bit integer arithmetic, which is of course fast, but it effectively nulls out the accuracy gain you could get from stats collection. Stats on the Rigol effectively do not work because of this. Initially I was hoping to do the same trickery I do with my new scope... when needing extra accuracy (with repeating signals), just crank the stats up to 1000x and enjoy extreme timing accuracy even with low sample sets (max 16kpts with ETS).

It makes sense for a pretty display but its measurement accuracy leaves a lot to be desired.  One of the few really useful things I could use a DSO for is RMS noise measurements and the Rigol does not work at all for that.

My old Tektronix DSOs all use a 16 bit processing record so when averaging is used, it really works and the 10 bit displays turn smooth as glass.  In an odd way, they look like an analog oscilloscope but without the noise.

Quote
Rigol obviously got ideas from Keysight's MegaZoom... What I still don't get about MZ is that even Keysight says it's only "screen pixels", yet you get substantially more accuracy than with Rigol. The *SOX3000T seems to deliver about 20kpts-like accuracy. Maybe MegaZoom only decimates rather than destroys, and "statistical gain" kicks in? 20kpts-like seems about right for statistical gain on screen pixels.

I would have to play with them to try and figure out what is going on.

Quote
I'm not sure there is yet enough market pressure for a cheap super-scope having both 1M+ wfm/s and gigabytes of memory, despite it being totally possible technically (you need hardware parallelism to cover the non-trivial use cases, just like with multi-core CPUs). But that's only good, because with a single perfect scope there would be no excuse to acquire a wide variety of gear... which would be very hard to accept for people suffering from various gear-acquisition-related illnesses :P

What is the case again for super long record lengths when you have delayed acquisition, DPO type processing, and segmented memory?

Quote
Edit: It seems the new R&S is trying to find some middle ground, with 10...20MB of main memory and 160MB of segmented. I cannot find trigger rearm data; wfm/s is 50,000 (maybe worse when offloading to segmented memory?). There has to be some (cost/performance) reason to differentiate the two memories.

That sounds rather like what I was suggesting.  Generate the histograms (plural), which use less memory but require maximum bandwidth, in the ASIC or FPGA, while streaming the original record to slower but larger memory.  Update the screen at the screen refresh rate or slower.  Double buffer to double the number of waveforms per second at the cost of twice as much memory.
 

Offline MrW0lf

Re: Oscilliscope memory, type and why so small?
« Reply #114 on: March 14, 2017, 08:03:44 am »
Quote
My old Tektronix DSOs all use a 16 bit processing record so when averaging is used, it really works and the 10 bit displays turn smooth as glass.  In an odd way, they look like an analog oscilloscope but without the noise.

Tek is still pretty good at that; wfms look rather sharp and luxurious compared to an equivalent Keysight, which looks oversmoothed.

Quote
What is the case again for super long record lengths when you have delayed acquisition, DPO type processing, and segmented memory?

I can see at least 2:
- decoding: dumping everything into a non-stop record, having a full event table over the whole record, and then n+1 zoom windows to inspect, plus search/sorting etc.
- monitoring of physical processes: again, you cannot beat a non-stop record over the whole process
So in general: in-depth analysis of fully detailed data, not mixed (DPO) or chopped.

 

Offline NorthGuy

Re: Oscilliscope memory, type and why so small?
« Reply #115 on: March 14, 2017, 01:31:02 pm »
Quote
What is the case again for super long record lengths when you have delayed acquisition, DPO type processing, and segmented memory?

If you want to record some sort of events and work with them (e.g. compare the event you've got today with an event you had a month ago), you want memory long enough to capture the whole event, whatever this might be.

Segmented memory is also memory. The more memory you have, the more segments you can acquire. Most scopes move segments to some other kind of "slow" memory, losing some of the segments in the process (hence the wfms/sec limit). A bigger acquisition memory may let you acquire more segments, eliminate gaps between segments, etc.

 

Offline nfmax

Re: Oscilliscope memory, type and why so small?
« Reply #116 on: March 14, 2017, 01:52:03 pm »
Quote
What is the case again for super long record lengths when you have delayed acquisition, DPO type processing, and segmented memory?

Quote
If you want to record some sort of events and work with them (e.g. compare the event you've got today with an event you had a month ago), you want memory long enough to capture the whole event, whatever this might be.

Of course you do - but that is not an oscilloscope. It's what is called a 'transient recorder' or 'high-speed digitiser', and those are available with very large amounts of memory, e.g. http://www.keysight.com/en/pc-1128783/High-Speed-Digitizers-and-Multichannel-Data-Acquisition-Solution?pm=SC&nid=-35556.0&cc=GB&lc=eng. No display processing at all, though.
 

Offline NorthGuy

Re: Oscilliscope memory, type and why so small?
« Reply #117 on: March 14, 2017, 02:32:43 pm »
Quote
Of course you do - but that is not an oscilloscope. It's what is called a 'transient recorder' or 'high-speed digitiser', and those are available with very large amounts of memory, e.g. http://www.keysight.com/en/pc-1128783/High-Speed-Digitizers-and-Multichannel-Data-Acquisition-Solution?pm=SC&nid=-35556.0&cc=GB&lc=eng. No display processing at all, though.

Sure, but we're talking about adding similar capabilities to a scope for the likes of $100 or so, because DDR3 memory is mass produced and cheap.
 

Offline R005T3r

Re: Oscilliscope memory, type and why so small?
« Reply #118 on: March 14, 2017, 04:19:36 pm »
And DDR3 is also less tested than DDR2.

While it's a great idea to use DDR3 memory because it's faster, will it be reliable enough within a safe margin? It's the same reason you'd bother using Windows XP in a super-duper-6-digit-priced spectrum analyzer: it's more tested.
 

Offline NorthGuy

Re: Oscilliscope memory, type and why so small?
« Reply #119 on: March 14, 2017, 05:07:01 pm »
Quote
And DDR3 is also less tested than DDR2.

While it's a great idea to use DDR3 memory because it's faster, will it be reliable enough within a safe margin? It's the same reason you'd bother using Windows XP in a super-duper-6-digit-priced spectrum analyzer: it's more tested.

DDR3 is probably in 80% of all computers now and doing well. I don't think you can test anything more than this.

DDR2 is obsolete and harder to buy.

DDR4 requires fast expensive FPGAs.
 

Offline MrW0lf

Re: Oscilliscope memory, type and why so small?
« Reply #120 on: March 14, 2017, 08:57:24 pm »
Quote
DDR3 is probably in 80% of all computers now and doing well.

AFAIK in Dave's on-the-go teardown of the Pico 5000 (#521) he discovered some DDR indeed. I doubt my 2408B is much different: released later, but a basic model. Just tested: 1.285Mwfm/s in a 10,000wfm burst @ 1GSa/s. So I guess the DDR concept works, at least for batch acquisition.
 

Offline R005T3r

Re: Oscilliscope memory, type and why so small?
« Reply #121 on: March 17, 2017, 02:49:43 pm »
Quote
And DDR3 is also less tested than DDR2.

While it's a great idea to use DDR3 memory because it's faster, will it be reliable enough within a safe margin? It's the same reason you'd bother using Windows XP in a super-duper-6-digit-priced spectrum analyzer: it's more tested.

DDR3 is probably in 80% of all computers now and doing well. I don't think you can test anything more than this.

DDR2 is obsolete and harder to buy.

DDR4 requires fast expensive FPGAs.
There might also be other specifications we are unaware of that make DDR2 memory suitable for this purpose, and I doubt it's a price point.
 

Offline NorthGuy

Re: Oscilliscope memory, type and why so small?
« Reply #122 on: March 17, 2017, 06:10:46 pm »
Quote
There might also be other specifications we are unaware of that make DDR2 memory suitable for this purpose

No.

DDR2 is not that much different from DDR3 - slightly higher voltage and a less sophisticated protocol. Thus DDR2 is a bit slower: the best DDR3 is about 2 times faster than the best DDR2. In practical terms the difference is smaller, because 2133Mb/s DDR3 is rare and difficult to deal with, so in practice you get 1333Mb/s from DDR3, perhaps even less, while the most common DDR2 is 800Mb/s. Otherwise they're both the same.

The bandwidth utilization of the memory in today's scopes is very low - perhaps 20-30%, if not less. Therefore the effective bandwidth delivered by the memory is less than the bandwidth required for continuous acquisition. The bandwidth can be increased by a few means:

- Increasing the bus width: x2 - x4
- Using better controllers: x4 - x5
- Using faster memory technology: up to x2

So, the means for increasing bandwidth are easy to achieve and diverse. Once the memory bandwidth catches up with the acquisition, you gain access to cheap PC memory. Imagine: a customer spends $100 on a DIMM to upgrade his scope to 16GB of acquisition memory by simply replacing the existing DIMM in the scope (as you would do with a computer).
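Multiplying out those factors, taken at face value (a sketch; the baseline and the multipliers are the ones this post gives):

Code:
# Compound effect of the options listed above, starting from a 64-bit
# DDR2-800 DIMM used at ~25% efficiency (figures taken at face value).
base_gbit_s = 64 * 800e6 / 1e9 * 0.25        # ~12.8 Gb/s effective
for width, controller, tech in ((2, 4, 1), (4, 5, 2)):   # low and high cases
    factor = width * controller * tech
    print(f"x{factor}: {base_gbit_s * factor:.0f} Gb/s")
# x8 gives ~102 Gb/s, x40 ~512 Gb/s -- far past the 32 Gb/s that a
# 4-channel 1 GS/s, 8-bit acquisition stream needs.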

However, modern scope designers, instead of increasing the memory bandwidth, use various methods to lower the bandwidth requirements, such as intermediary memory, gaps in the acquisition process, etc. Hence we do not see scopes with substantial amounts of acquisition memory.

The problem is certainly not a technical one.

 

Offline MrW0lf

Re: Oscilliscope memory, type and why so small?
« Reply #123 on: March 17, 2017, 07:59:43 pm »
Quote
The problem is certainly not a technical one.

http://store.digilentinc.com/digital-discovery-portable-logic-analyzer-and-digital-pattern-generator/
100MHz signal input bandwidth
2Gbit DDR3 acquisition buffer for Logic Analyzer

256MB in conventional terms, then... It seems those little USB boxes from Pico and Digilent bite harder every day...
 

Offline nctnico

Re: Oscilliscope memory, type and why so small?
« Reply #124 on: March 17, 2017, 08:06:00 pm »
Quote
There might also be other specifications we are unaware of that make DDR2 memory suitable for this purpose

DDR2 is not that much different from DDR3 - slightly higher voltage and a less sophisticated protocol. Thus DDR2 is a bit slower: the best DDR3 is about 2 times faster than the best DDR2. In practical terms the difference is smaller, because 2133Mb/s DDR3 is rare and difficult to deal with, so in practice you get 1333Mb/s from DDR3, perhaps even less, while the most common DDR2 is 800Mb/s. Otherwise they're both the same.

The bandwidth utilization of the memory in today's scopes is very low - perhaps 20-30%, if not less. Therefore the effective bandwidth delivered by the memory is less than the bandwidth required for continuous acquisition. The bandwidth can be increased by a few means:
Where did you get the idea that bandwidth utilisation is low? If you have 1333Mb/s DDR3 it is capable of streaming data at 1333Mb/s per pin continuously - not in short bursts, but continuously until the memory breaks from old age. And why DDR2 would be better than DDR3 is just beyond me!  :palm: Really... read some information on the subject before posting.
 

