- High-speed data is clocked at 480.00 Mb/s with a data signalling tolerance of ±500 ppm.
- Full-speed data is clocked at 12.000 Mb/s with a data signalling tolerance of ±0.25% or 2,500 ppm.
- Low-speed data is clocked at 1.50 Mb/s with a data signalling tolerance of ±1.5% or 15,000 ppm.
So the tolerance to clock deviation specified in the USB spec is ±1.5%, yet even the "PLL" implementations of V-USB at 12.8 MHz and 16.5 MHz require ±1%. Why?
As I understand it, there are two sources of timing error:
a) Jitter caused by discrete sampling of the input during SYNC
b) A clock-rate divergence between host and client.
For the 16.5 MHz implementation, each USB bit-time equals 11 clock cycles.
To sample the center of a bit, we would have to sample 6 cycles after the edge of the sync pulse, which equals the total allowable clock-error margin.
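Just to make those numbers explicit, here is a trivial sketch of the arithmetic in plain C (my own illustration, nothing to do with the actual V-USB code):

```c
#include <stdio.h>

int main(void)
{
    const double f_cpu = 16.5e6;   /* CPU clock in Hz (16.5 MHz option) */
    const double f_bit = 1.5e6;    /* low-speed USB bit rate            */

    const double cycles_per_bit = f_cpu / f_bit;         /* = 11.0               */
    const double center_offset  = cycles_per_bit / 2.0;  /* ~5.5, rounded up to 6 */

    printf("cycles per bit: %.1f\n", cycles_per_bit);
    printf("ideal sampling point: %.1f cycles after the edge\n", center_offset);
    return 0;
}
```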
a)
"waitForK" samples the input every two cycles. That means that up to one cycle jitter is introduced.
Furthermore, another cycle can be added due to the phase difference between cpu clock and host.
Therefore, jitter reduces the clock-error margin by two cycles to 4.
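In code form, the jitter budget of a) looks like this (values taken straight from the reasoning above, not from V-USB itself):

```c
#include <stdio.h>

int main(void)
{
    const int cycles_per_bit    = 11;  /* 16.5 MHz / 1.5 Mbit/s                */
    const int center_offset     = 6;   /* target sampling point after the edge */
    const int sampling_jitter   = 1;   /* waitForK polls only every 2 cycles   */
    const int phase_uncertainty = 1;   /* CPU clock vs. host edge alignment    */

    int margin = center_offset - sampling_jitter - phase_uncertainty;
    printf("remaining margin: %d of %d cycles per bit\n", margin, cycles_per_bit);
    return 0;
}
```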
b)
Since the clocks are resynchronized at the beginning of every packet, only the clock deviation between the end of SYNC and the beginning of EOP (SE0) is relevant. EOP consists of two bits of SE0 and is therefore immune to single-bit clock errors.
The maximum data packet payload for low-speed USB is 8 bytes. The total relevant packet length is therefore 1 (PID) + 8 (data) + 2 (CRC16) = 11 bytes, or only 9 bytes when the CRC16 is ignored.
This equals 88 bits. In the hypothetical worst case all data is FF, which is practically impossible. In that case 14 bits are "stuffed", resulting in a total maximum critical packet length of 102 bits, or 84 bits without the CRC.
102 bit-times equal 1122 CPU clock cycles (102 × 11). If a maximum deviation of 4 clock cycles is allowed, the allowed CPU clock deviation is 4/1122 ≈ 0.35%. When the CRC16 is ignored, the maximum allowed deviation is 4/924 ≈ 0.43%. Note that almost all V-USB projects ignore the CRC16.
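The same calculation as a quick C sketch (again my own illustration, only reproducing the numbers above):

```c
#include <stdio.h>

/* Worst-case receive tolerance for the 16.5 MHz clock (11 cycles per bit),
 * assuming the all-ones payload so that one bit is stuffed per 6 data bits,
 * and the remaining 4-cycle margin from a). */
static void tolerance(int bytes)
{
    int bits       = bytes * 8;
    int stuffed    = bits / 6;        /* worst-case bit stuffing */
    int total_bits = bits + stuffed;
    int cycles     = total_bits * 11;

    printf("%2d bytes: %3d bits, %4d cycles -> %.3f%% tolerance\n",
           bytes, total_bits, cycles, 100.0 * 4.0 / cycles);
}

int main(void)
{
    tolerance(11);   /* PID + 8 data + CRC16        -> 102 bits, 1122 cycles */
    tolerance(9);    /* PID + 8 data, CRC16 ignored ->  84 bits,  924 cycles */
    return 0;
}
```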
So, my conclusion is that the allowed clock deviation at 16.5 MHz is 0.35% (or 0.43% when ignoring the CRC16) for receiving data, and 1.5% for sending.
Did I miss anything?
From my experience, it is possible to adjust the internal RC oscillator of the newer ATtinies to within these specs. I have tried several corner cases (ATtiny84 at 12 MHz, ATtiny10 at 12 MHz, ATtiny85 at 12/16 MHz, ATtiny841 at 16 MHz) and never found an obvious timing issue.
Of course, there are some potential pitfalls:
- It is also possible that the host does not use an accurate clock. In that case, both timing errors would add up in the worst case. However, this is not relevant when the RC oscillator is calibrated from the keep-alive pulses.
- The RC oscillator is not immune to long-term drift. This is a serious issue and would require either an application that is only active for a short time (e.g. a bootloader) or continuous recalibration (see the sketch below). The newest-generation parts like the ATtiny841 have a temperature-compensated RC oscillator, which should be more stable.
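For the continuous-recalibration case, here is a minimal sketch of the general idea, assuming a part with an OSCCAL register (e.g. ATtiny85). The function name, the target constant, and the way the frame period is measured are my own placeholders; this is not V-USB's actual osccal routine, and newer parts like the ATtiny841 name the register OSCCAL0 and have range/step constraints that real code must respect.

```c
/* Nudge the internal RC oscillator towards the nominal frequency by
 * comparing a measured 1 ms keep-alive / frame period (counted in CPU
 * cycles by some timer, measurement not shown) against the target. */
#include <avr/io.h>
#include <stdint.h>

#define TARGET_CYCLES 16500u   /* 1 ms at a nominal 16.5 MHz (assumption) */

static void osccal_trim_step(uint16_t measured_frame_cycles)
{
    if (measured_frame_cycles < TARGET_CYCLES)
        OSCCAL++;              /* oscillator too slow -> speed it up  */
    else if (measured_frame_cycles > TARGET_CYCLES)
        OSCCAL--;              /* oscillator too fast -> slow it down */
}
```

Calling something like this once per frame keeps the oscillator locked to the host's 1 ms timing, which also removes the host-clock-accuracy concern from the first bullet.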