Figure 1: The first step in quantifying jitter is to determine measured times of arrival for bit transitions.
As noted, jitter causes bit errors because an edge arrives too early or too late. The measure of this earliness or lateness is the time-interval error (TIE). Measuring the TIE values in a waveform begins with determining the measured arrival time of each edge in the bit stream (Figure 1), which is the time at which the edge crosses a decision threshold. Because that crossing almost always falls between samples, interpolation is needed to pin down the arrival time. Many of today's real-time digital oscilloscopes can capture this data.
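To make that step concrete, here's a minimal Python sketch of threshold-crossing detection with linear interpolation. The function name, the default 0 V threshold, and the choice of linear (rather than sinc) interpolation are illustrative assumptions, not any particular oscilloscope's implementation:

```python
import numpy as np

def edge_times(waveform, sample_period, threshold=0.0):
    """Interpolated threshold-crossing times for every edge in a waveform.

    waveform      -- 1-D array of voltage samples (assumed captured data)
    sample_period -- time between samples, in seconds
    threshold     -- decision level separating logic 0 from logic 1
    """
    above = waveform > threshold
    # Sample indices just before the signal crosses the threshold
    idx = np.flatnonzero(above[1:] != above[:-1])
    # Linearly interpolate between the two samples bracketing each crossing
    frac = (threshold - waveform[idx]) / (waveform[idx + 1] - waveform[idx])
    return (idx + frac) * sample_period
```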
Figure 2: Determining expected edge arrival times with a clock/strobe signal being transmitted.
How the expected arrival times are determined depends on whether a timing reference accompanies the data. The first scenario is when a reference clock and/or strobe is transmitted, as with dual-data-rate (DDR) memory channels, where a strobe signal derived from a clock latches the bit high or low (Figure 2). In such cases, the expected arrival times of the data edges are defined by the measured arrival times of the strobe signal's edges. Again, the strobe's crossing time will almost always fall between samples, so interpolation is needed to determine the actual arrival time.
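As a sketch of the strobe-referenced case, the snippet below computes TIE as each data edge's measured time minus that of the nearest strobe edge. The nearest-edge pairing is a simplifying assumption; real DDR timing analysis applies the standard's setup/hold relationships rather than simple proximity:

```python
import numpy as np

def tie_from_strobe(data_edges, strobe_edges):
    """TIE of each data edge relative to the nearest strobe edge.

    Both inputs are sorted arrays of interpolated crossing times,
    e.g. as returned by edge_times() above.
    """
    data_edges = np.asarray(data_edges)
    strobe_edges = np.asarray(strobe_edges)
    # For each data edge, find the strobe edges that bracket it
    pos = np.clip(np.searchsorted(strobe_edges, data_edges),
                  1, len(strobe_edges) - 1)
    left, right = strobe_edges[pos - 1], strobe_edges[pos]
    # Expected time is whichever strobe edge lies closer
    nearest = np.where(data_edges - left < right - data_edges, left, right)
    return data_edges - nearest
```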
The second scenario is when no clock or strobe signal is transmitted, as with the USB protocol. In these cases, a clock and data recovery (CDR) circuit comes into play: the receiver generates a clock from an approximate frequency reference and then phase-aligns it to the transitions in the bit stream with a phase-locked loop (PLL). A software CDR algorithm in the oscilloscope likewise uses the bit stream to recover the underlying clock.
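A production-grade CDR is beyond this article's scope, but a toy first-order loop conveys the idea: step an ideal clock forward in whole unit intervals and nudge its phase toward each measured edge. The loop gain and the phase-only tracking are simplifications; real CDRs also track frequency and use the loop bandwidth their standard specifies:

```python
def recover_clock(edge_times, nominal_ui, gain=0.05):
    """Toy first-order software CDR operating on measured edge times.

    nominal_ui -- approximate unit interval (e.g. from the standard's
                  nominal bit rate, or estimated as shown below)
    gain       -- phase-correction factor, a stand-in for loop bandwidth
    Returns the expected arrival time assigned to each measured edge.
    """
    expected = []
    t = edge_times[0]  # start the recovered clock on the first edge
    for te in edge_times:
        # Advance the clock by however many whole UIs separate it from
        # the measured edge (consecutive edges may be many bits apart)
        t += round((te - t) / nominal_ui) * nominal_ui
        expected.append(t)
        # Nudge the clock phase toward the measured edge, PLL-style
        t += gain * (te - t)
    return expected
```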
Figure 3: Histogram of the ΔT values between successive edges, from which the bit rate can be gleaned.
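That bit rate can be estimated directly from the ΔT values. One rough approach, sketched below, takes the smallest spacing as a first guess at the unit interval and then averages the jitter out across all spacings; it's a simple heuristic, not the histogram-peak fitting a scope would actually perform:

```python
import numpy as np

def estimate_ui(edge_times):
    """Estimate the unit interval (1 / bit rate) from edge spacings.

    Successive edges are separated by integer multiples of the UI,
    which is what the ΔT histogram of Figure 3 makes visible.
    """
    dt = np.diff(edge_times)
    ui_guess = dt.min()            # crude guess: smallest spacing ~ 1 UI
    n = np.round(dt / ui_guess)    # whole UIs spanned by each spacing
    return dt.sum() / n.sum()      # jitter averages out over all spacings
```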
Figure 4: A jitter track, encompassing the TIE values for all edges in the waveform, is the basis for subsequent jitter analysis.
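Putting the sketches together, the jitter track is simply the measured arrival times minus the expected ones. Assuming the hypothetical functions above and an illustrative 40-GS/s capture:

```python
import numpy as np

t_measured = edge_times(waveform, sample_period=1 / 40e9)  # waveform: captured samples
ui = estimate_ui(t_measured)                     # bit rate gleaned from the ΔT spacings
t_expected = recover_clock(t_measured, ui)       # software CDR from the sketch above
tie_track = t_measured - np.asarray(t_expected)  # the jitter track of Figure 4
```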
In later installments, we'll look at laying the groundwork for jitter analysis and at breaking jitter down into its component parts.