In the digital world, any 10–90% or 20–80% rise/fall time variations are of interest.
So I would like:
- to measure digital 3.3–5 V reference signals from an OCXO or clock distribution, in the range of 5–100 MHz
- this means an accurate internal or external reference is needed, while the DUT OCXO phase noise is about −120 dBc/Hz @ 1 Hz, or even lower
- this requires a high-impedance differential connection/probe to the DUT
- also to measure any ripple on the analog/digital supplies of ADCs/DACs, as well as on VRef
- a histogram would tell it all; otherwise one has to use pricey LeCroy gear with jitter SW
IMHO the picture tells all, but it does not convince me whether we measure the DUT jitter or the jitter of the internally used TCXO reference.
Hi SJL-Instruments!
My name is David and I'm responsible for all of the electronics engineering at https://thinksmartbox.com/
We design custom tablet computers for people with disabilities.
I would like some help understanding whether your device can be used for USB SI testing up to USB 3 Gen 2 (10 Gbit/s). I admit I'm not that familiar with problems in the GHz range.
Here's a quick summary of the test specs, compiled by R&S https://www.rohde-schwarz.taipei/data/activity/file/1644474550064631375.pdf
My questions are:
- Are you familiar with the standard? Have you ever tried anything like this, or do you have customers that have done it?
- Can your device meet the requirements for USB 3 Gen 1 (5 Gbit/s)?
- Can your device meet the requirements for USB 3 Gen 2 (10 Gbit/s)?
Yep, these numbers look correct. Units are in probability density per volt. For intensity-grading, the overall scaling is arbitrary and is controlled by the brightness slider. The official software has some auto-ranging for convenience, but it’s not necessary.
...
The first point in your result (with a negative dV) should be thrown out. Each final PDF value corresponds to the interval between two neighbouring CDF points.
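In numpy terms, the CDF-to-PDF bookkeeping looks roughly like this (a sketch only; the function and array names are illustrative, not from the official software):

```python
import numpy as np

def cdf_to_pdf(v, cdf):
    """Differentiate a sampled CDF into one PDF value per voltage interval."""
    dv = np.diff(v)            # width of each interval between CDF points
    dp = np.diff(cdf)          # probability mass falling in each interval
    keep = dv > 0              # throw out the point with a negative dV
    pdf = dp[keep] / dv[keep]  # probability density per volt
    centers = (v[:-1] + v[1:])[keep] / 2   # one PDF value per interval
    return centers, pdf
```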
Thanks for double-checking my work. ... I will need to think about how best to plot it. ...
I enjoyed watching their review. I almost wish they had given you another week to work on it; even since the video's release you have made several improvements. It's too bad that the speckles are a focal point. I was talking out loud as he started trying different settings to improve it: crank up the triggers!!!
For starters, I need to sort out how to plot the PDF. I collected raw PAM4 data and attempted to post-process it. Any tips on how you converted the PDF back into the voltages you plot?
***
Shown here is the raw PDF data for the PAM8 signal. While I can see the 8 distinct levels, obviously this is not correct.
***
Looking at the PAM4 data (20k triggers, 100 CDF samples; basically the same settings that produce a decent-looking eye in your software), I was surprised how much the PDF varies. On the right side, you can see the sorted PDF values (48,000 total), represented by 366 unique levels ranging from 1264 down to 0. Negative PDFs were set to 0.
Sorting the highest PDF values and then indexing to their corresponding voltages, I get the display on the left (a pure guess on my part that this is what you are doing). I then look at the PDF distribution (right histogram) and sort for the areas with the highest peaks. I then search for only the data that falls within a small percentage of these, which gives me the plot in the center. It does a fair job de-speckling the data, but we are also losing some of the good data... so not a good solution. I tried a few other simple corrections. My takeaway: it's not a super simple problem.
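The peak filtering was along these lines, in numpy (bin counts, number of peaks, and window width here are illustrative, not the exact values I used):

```python
import numpy as np

def keep_near_peaks(pdf, n_peaks=4, rel_window=0.05, bins=100):
    """Keep only PDF samples within a small percentage of the strongest
    peaks in the histogram of PDF values; zero everything else."""
    counts, edges = np.histogram(pdf[pdf > 0], bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    peaks = centers[np.argsort(counts)[-n_peaks:]]   # most populated bins
    keep = np.zeros(pdf.shape, dtype=bool)
    for p in peaks:
        keep |= np.abs(pdf - p) <= rel_window * p    # close enough to a peak?
    return np.where(keep, pdf, 0.0)
```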
I was building pretty much this about 12 years ago: https://www.eevblog.com/forum/projects/diy-ghz-sampling-head-for-lt100mhz-scopes/msg971961/#msg971961 but didn't really get around to finishing it (that post is much newer than the project itself). I didn't know how to do FPGAs back then, so there's a bit more discrete ECL logic in my design. Really nice to see that someone took that concept and turned it into a well-polished modern product!
I built the ring oscillator out of a meandering trace rather than the adjustable delay line since I thought that it'd have lower jitter.
Since there was no FPGA in my design and everything was controlled by an MCU, the triggering rate was much slower. To somewhat compensate for that, I didn't sweep the comparator threshold to build the CDF. Instead I used the comparator to build a SAR ADC that does one bit per trigger event. This approach obviously falls apart when there's significant noise or jitter on the measured signal, but worked well enough for my purposes.
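In pseudocode, each conversion looked roughly like this (`dac_set` and `comparator_read` are hypothetical placeholders for the MCU's hardware accessors):

```python
def sar_sample(dac_set, comparator_read, n_bits=8):
    """One SAR conversion: each comparator_read() blocks until the next
    trigger event and returns True if the input was above the threshold."""
    code = 0
    for bit in reversed(range(n_bits)):
        trial = code | (1 << bit)   # propose the next bit, MSB first
        dac_set(trial)              # move the comparator threshold
        if comparator_read():       # one trigger event -> one decision
            code = trial            # input was above threshold: keep the bit
    return code
```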
The PDF values shouldn't be sorted. Each PDF value corresponds to the region between the two voltage values from which it was derived. The interval within this region should be shaded with intensity proportional to the PDF value.
This is why there is one less PDF value than the count of voltages you start with.
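As an illustration, shading one time-slice could look like this in matplotlib (a sketch of the idea only, not the actual rendering code):

```python
import numpy as np
import matplotlib.pyplot as plt

def shade_column(ax, x, v, pdf, width=1.0):
    """Shade one time-slice: len(v) threshold voltages, len(v)-1 PDF values."""
    alpha = pdf / pdf.max() if pdf.max() > 0 else pdf
    for lo, hi, a in zip(v[:-1], v[1:], alpha):
        # each interval [lo, hi] gets intensity proportional to its density
        ax.fill_between([x, x + width], lo, hi,
                        color="C0", alpha=float(np.clip(a, 0.0, 1.0)),
                        linewidth=0)

# usage sketch: one shaded column per trigger-time bin
# fig, ax = plt.subplots()
# for x, (v, pdf) in enumerate(columns):   # `columns` is hypothetical
#     shade_column(ax, x, v, pdf)
```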
...
Filtering each individual PDF distribution is not a good approach, as you've found, since it biases the data downwards. This will remove rare features and lead to inaccurate visualization of the data.
...
Of course, the firmware and software changes should largely fix the root issue, and the above technique will just deal with the residual statistical noise.
Looks like a job for a median filter to me. Or if you want to get a little fancier...
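Something along these lines, assuming the eye is already a 2D array of PDF intensities (voltage bins × time bins):

```python
import numpy as np
from scipy.ndimage import median_filter

def despeckle(eye: np.ndarray, size: int = 3) -> np.ndarray:
    """Replace each cell with the median of its size-by-size neighbourhood:
    isolated speckles vanish while broad eye features survive."""
    return median_filter(eye, size=size)
```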
With a vertical scan that requires the cells above and below to be active, we are already losing a fair amount of good data. I would need to do a horizontal scan as well. It could certainly be done, but what a mess...
In the long-term, the speckle issue can be completely eliminated with a dual-comparator design. If/when we introduce a new model of the GigaWave (or a dedicated SI analyzer), we will implement this.
Comparing the latest 2.5.12 with the earliest version I have, 2.5.3, both in demo mode, attempting to set the intensity to give the same shading.
IMO, these speckles are always going to raise the question for the user of whether they are dealing with a scope problem or a signal problem. Looking forward to the updated firmware. I have not tried bumping the triggers above 30k as suggested.
I am sure you were aware of the speckles early in the design phase, most likely before even starting on the hardware. I envision the signal processing was simulated first, but maybe not. I am curious: if you knew changing the architecture would have solved it, why didn't you just change it? Was the added cost really that big of a factor?
Had the dual-comparator approach been used, how would it have affected the sweep speed compared to your future firmware approach?