Figure 1: Latching a signal at the outermost of the blue hash marks results in a BER of 10⁻³, while latching it at the innermost hash marks yields a BER of 10⁻¹²
The BER seen at a receiver depends on two things: how much jitter the signal has, and where within the bit period the signal is being latched (Figure 1). Latching the signal close to the crossover point might yield a BER of 10⁻³, or 1 bit error in 1000 bits. Moving the latching point nearer to the middle of the eye gets the BER down to 10⁻¹², or 1 bit in a million million bits.
Figure 2: The classic "bathtub curve" relating a receiver's
timing margins to BER based on the signal's jitter
The graph doesn't allow us to say definitively that the eye is X picoseconds wide. But it does allow us to say that at a BER of, say, 10⁻³, the eye is X picoseconds wide. Thus, we are able to measure the duration of the eye opening by finding the points on either side of the eye at which the BER reaches a given level.
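To make that concrete, here's a minimal sketch of the idea, with assumed example values (Gaussian edge jitter of 2 ps rms, a 100 ps unit interval, jitter purely random). It builds a bathtub-style BER function from the two edges of the eye and walks in from each side to find the eye opening at a given BER:

```python
import math

SIGMA = 2.0   # rms jitter per edge crossing, ps (assumed value)
UI = 100.0    # unit interval, ps (assumed value)

def gauss_tail(x):
    # One-sided Gaussian tail probability Q(x).
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber(t):
    # BER at latch position t: the left edge jitters past t,
    # or the right edge (one UI later) jitters back before t.
    return gauss_tail(t / SIGMA) + gauss_tail((UI - t) / SIGMA)

def eye_width(target):
    # Walk in from each side of the UI to where BER drops to target.
    step = 0.01
    left = 0.0
    while ber(left) > target:
        left += step
    right = UI
    while ber(right) > target:
        right -= step
    return right - left

print(round(eye_width(1e-3), 1))   # eye opening at BER 1e-3, ps
print(round(eye_width(1e-12), 1))  # narrower opening at BER 1e-12
```

Requiring a lower BER moves the bathtub walls inward, so the eye measured at 10⁻¹² comes out narrower than the eye measured at 10⁻³.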
The inverse of the concept of "eye opening @ BER" discussed above is that of "total jitter (Tj) @ BER." The amount of jitter closing the eye is analogous to peak-to-peak jitter, but with a certain statistical significance attached. If we were to look at 1000 bits, what value of peak-to-peak jitter should we expect to see? If we were to look at 10⁹ or 10¹² bits, we would expect a larger value. So statistically speaking, we have an idea of peak-to-peak jitter, but with a confidence factor.
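The point that "peak-to-peak" only means something once you fix how many bits you observe is easy to demonstrate. A quick sketch, assuming purely Gaussian jitter with a hypothetical 1 ps rms:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def peak_to_peak(n, sigma=1.0):
    # Peak-to-peak spread of n Gaussian jitter samples (ps).
    samples = [random.gauss(0.0, sigma) for _ in range(n)]
    return max(samples) - min(samples)

# The longer we watch, the wider the "peak-to-peak" jitter grows;
# for an unbounded Gaussian it never stops growing.
for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9} samples: {peak_to_peak(n):.1f} ps")
```

This is exactly why an unbounded random component has no true peak-to-peak value, and why Tj must always be quoted at a BER.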
Figure 3: The Dual-Dirac jitter model permits calculation of Tj @ BER at any arbitrary BER value
If we gather a histogram of edge-crossing points, can we extrapolate from that histogram to what the total jitter would be at a given BER? As it turns out, yes, we can, by fitting Gaussian functions to the tails of the distribution. The fit yields the two values we need: one is the sigma of the Gaussians, which corresponds to the random component of the jitter, and the other is the separation between the Gaussian mean values, which is the model's "view" of the deterministic jitter component. The deterministic components are what define the shape of the distribution between the tails.
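One way to sketch the tail-fitting idea: for a Gaussian, the logarithm of the histogram counts is a parabola, so fitting a quadratic to the log-counts of each tail recovers sigma (from the curvature) and the Gaussian mean (from the vertex). The values below are assumed for illustration (two deterministic edges at ±5 ps, 2 ps rms random jitter), and NumPy is used only for the quadratic fit:

```python
import math
import random

import numpy as np  # assumed available, used only for polyfit

random.seed(1)

# Synthetic edge crossings: deterministic jitter modeled as two
# discrete arrival times (+/-5 ps) plus Gaussian random jitter.
SIGMA, DJ_HALF = 2.0, 5.0
times = [random.gauss(random.choice((-DJ_HALF, DJ_HALF)), SIGMA)
         for _ in range(200_000)]

# Bin the crossings into a histogram.
BIN = 0.25
counts = {}
for t in times:
    b = round(t / BIN)
    counts[b] = counts.get(b, 0) + 1

def fit_tail(sign):
    # Keep only bins beyond the deterministic edge on one side,
    # where a single Gaussian should dominate the counts.
    pts = [(b * BIN, math.log(c)) for b, c in counts.items()
           if sign * b * BIN > DJ_HALF + SIGMA and c >= 10]
    x, y = zip(*pts)
    a, b1, _ = np.polyfit(x, y, 2)       # ln(count) is a parabola
    sigma = math.sqrt(-1.0 / (2.0 * a))  # curvature -> sigma
    mu = -b1 / (2.0 * a)                 # vertex -> Gaussian mean
    return sigma, mu

sig_r, mu_r = fit_tail(+1)
sig_l, mu_l = fit_tail(-1)
print(f"sigma ~ {(sig_l + sig_r) / 2:.2f} ps, "
      f"mean separation ~ {mu_r - mu_l:.2f} ps")
```

With enough samples, the fitted sigma lands near the 2 ps random component and the mean separation near the 10 ps between the deterministic edges; with sparse tails, as the text notes, the fit is much less stable.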
What we're talking about is known as the Dual-Dirac model, which came to prominence in the Fibre Channel methodologies for jitter and signal quality dating to the late 1990s (Figure 3). If you can take your data and fit those two Gaussian functions to it, those two values (the Gaussian sigma and separation between Gaussian means) can be plugged into the equation in Figure 3 to extrapolate the total jitter expected at any given BER value. The Dual-Dirac model, at least theoretically, gave us a reliable and repeatable way to quantify jitter through statistical modeling.
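Given those two fitted values, the extrapolation itself is straightforward. A commonly used simplified form of the dual-Dirac equation is Tj(BER) = DJ(δδ) + 2·Q(BER)·σ, where Q(BER) satisfies BER = ½·erfc(Q/√2); this form assumes a transition density of 1. A sketch with assumed example values (DJ(δδ) = 10 ps, σ = 2 ps):

```python
import math

def q_ber(ber):
    # Solve 0.5 * erfc(Q / sqrt(2)) = ber for Q by bisection.
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > ber:
            lo = mid  # BER still too high: move farther out
        else:
            hi = mid
    return (lo + hi) / 2

def tj_at_ber(dj_dd, sigma, ber):
    # Dual-Dirac extrapolation: Tj(BER) = DJ(dd) + 2 * Q(BER) * sigma
    return dj_dd + 2.0 * q_ber(ber) * sigma

print(round(q_ber(1e-12), 2))                 # ~7.03
print(round(tj_at_ber(10.0, 2.0, 1e-12), 1))  # ~38.1 ps
```

The multiplier 2·Q works out to roughly 14.07 at 10⁻¹², which is why that constant shows up in the jitter-budget tables of many serial-link specifications.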
Yet, having said all of that, tail fitting is a difficult problem. In fact, the same document that is the source for Figure 3 (Fibre Channel MJS, 1998) includes a footnote saying that "the most common technique for determining the best fit involves the human eyeball." Work was under way on a tail-fitting algorithm, but the problem is that the tails of the histogram are, by definition, the region in which we have the least data to work with. Thus, it takes a large data set before the result converges.
Fortunately, changes would come as the 1990s ended and a new millennium began that would alter the jitter-measurement landscape considerably. We'll pick up the history of jitter in a forthcoming post.