You need to test, we're here to help.

05 February 2014

Why Should You Measure Jitter?

Figure 1: Designing a serial-data channel for first-pass success means analysis and mitigation of jitter sources
As mentioned in an earlier post on some basics of jitter, the bane of serial-link design is a signal that doesn't arrive at its destination when it should, whether early or late. The goal of serial-link design and implementation is to transmit data with as few bit errors as possible. Thus, analyzing jitter is a key element of achieving first-pass design success.

The physical makeup of a typical serial-data channel (Figure 1) is chock-full of structures that are potential sources of jitter. Impedance mismatches can crop up anywhere in the critical path, which includes elements such as microstrip lines, vias, connectors, decoupling capacitors, and board/chip interfaces.

The raw truth is that jitter results in bit errors. There is a proper, or expected, time for a signal to pull into the station, so to speak, and when it fails to do that, well, that's no way to run a railroad (or a serial link). Wrong edge timing begets incorrect latching, which begets... you guessed it... bit errors.

Figure 2: Two examples of bit errors
Let's look at two simple examples as shown in Figure 2. Here we see two signals latched as low or zero. The vertical cursor at the latch (strobe) time represents the point in time at which we expect these signals' voltages to surpass the crossing detection level (the horizontal cursor). Unfortunately, one crosses the detection level too late, while the other simply fails to cross at all. Both of these instances would be chalked up as bit errors.
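To make that latching decision concrete, here is a minimal Python sketch of the check a receiver effectively performs at the strobe instant. The waveform arrays, detection level, and strobe time below are illustrative placeholders, not values from the post.

```python
import numpy as np

# Illustrative values only; a real receiver's threshold and strobe timing come
# from the link's specification and clock recovery, not from these constants.
DETECTION_LEVEL = 0.6   # crossing detection level, volts
STROBE_TIME = 100e-12   # expected latch instant, seconds

def latched_bit(times, volts, strobe=STROBE_TIME, threshold=DETECTION_LEVEL):
    """Return the bit a receiver would latch: 1 if the waveform is above the
    detection level at the strobe instant, otherwise 0."""
    v_at_strobe = np.interp(strobe, times, volts)
    return 1 if v_at_strobe > threshold else 0

def is_bit_error(times, volts, transmitted_bit):
    """Flag a bit error when the latched value disagrees with what was sent."""
    return latched_bit(times, volts) != transmitted_bit

# A rising edge that arrives too late is still below threshold at the strobe,
# so a transmitted 1 is latched as 0: a bit error.
t = np.array([0.0, 120e-12, 200e-12])
v_late = np.array([0.0, 0.2, 1.0])
print(is_bit_error(t, v_late, transmitted_bit=1))  # True
```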

It's impossible to design a high-speed serial-data channel that is completely free of bit errors. But what the design team should be chasing is the lowest possible bit-error rate (BER). The specifications for most serial-data protocol standards demand a very low BER. For example, the specification for USB 3.1 calls for a BER of less than 10⁻¹², that is, fewer than one bit error in every 10¹² bits, at a data rate of 5 GT/s. Failing to meet that requirement can prove quite costly.
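As a quick sanity check on what such a requirement implies, this back-of-the-envelope Python converts a BER limit into a mean time between errors. The 5 Gb/s line rate and 10⁻¹² target are taken from the example above; the arithmetic itself is generic.

```python
# Convert a BER limit into an average time between bit errors.
line_rate = 5e9      # bits per second (5 GT/s signaling, before coding overhead)
ber_limit = 1e-12    # maximum allowed bit-error rate

errors_per_second = line_rate * ber_limit
mean_time_between_errors = 1.0 / errors_per_second
print(f"{mean_time_between_errors:.0f} s between errors on average")  # ~200 s
```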

Figure 3: A graphic illustration of TIE
The difference between the measured time of arrival of an edge and the expected time of that edge's arrival represents its time-interval error (TIE). In other words, TIE describes how early or late an edge arrives vs. its expected arrival time. In Figure 3, the yellow trace represents a clock signal while the blue trace represents a data signal expected to cross the crossing detection level at the same time as the clock. As we can see, the data signal's crossing is late relative to the clock, and that rings up as a bit error.

While a low BER is the overall goal for the data channel's design, the jitter that causes bit errors is quantified by measuring and then analyzing time-interval error.
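As a rough illustration of that measurement, the sketch below computes per-edge TIE by comparing measured threshold-crossing times against an ideal reference grid. Real instruments recover the reference clock from the data itself (for example with a software PLL), so treating the grid as perfectly known is a simplification, and the edge times and unit interval here are made up for the example.

```python
import numpy as np

def time_interval_error(measured_edges, unit_interval):
    """TIE per edge: measured crossing time minus the nearest expected crossing
    time on an ideal reference grid spaced one unit interval apart."""
    measured = np.asarray(measured_edges, dtype=float)
    expected = np.round(measured / unit_interval) * unit_interval
    return measured - expected

# Example: edges of a nominal 5 Gb/s signal (UI = 200 ps); the third edge is late.
ui = 200e-12
edges = [0.0, 200e-12, 415e-12, 600e-12]
print(time_interval_error(edges, ui))  # third value shows +15 ps of error
```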

Hopefully, this post has made clear why one would measure jitter (quantified as TIE) in the first place. Measuring time-interval error is a multi-step process, and in subsequent posts in this series on jitter, we'll cover that process as well as the analysis of the measurement results.
