A capacitor between shield and ground is a good step (more precisely: the shield must be RF grounded), but it's dubious in practice. The reason is that any length of poor shielding introduces transients onto the pair, and potentially disrupts communication.
A related example is using USB on a remote connector, with a cable going to a header: the cable, connector, and harness might be fully shielded, but inevitably there is that 3cm length where the shield conductor enters a pin in the header, passes through the header, and only then reaches the board. (For the grounded-by-cap case, substitute a ground trace and chip component body for this length.) At the voltage magnitudes, and rates of change, present in EFT and ESD, you simply can't have more than a few mm of unshielded length in your circuit. Yes, that short link of wire or trace can easily drop 10, 20, 50V or more during one of these transients! That's not amazing, that's just physics and linear networks.
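To see why, here's a back-of-the-envelope V = L·di/dt estimate. The inductance-per-length and pulse numbers are my assumptions (ballpark figures in the spirit of IEC 61000-4-4 edge rates), not measurements:

```python
# Rough estimate of the voltage dropped across a short unshielded link
# during an EFT transient. Assumptions: ~1 nH/mm for a thin wire or
# trace, a ~5 ns rise time, and ~10 A of shield current during the pulse.

WIRE_INDUCTANCE_PER_MM = 1e-9   # H/mm, typical round-wire ballpark (assumed)
RISE_TIME = 5e-9                # s, EFT edge rise time (assumed)
PEAK_CURRENT = 10.0             # A, shield current during the pulse (assumed)

def link_drop(length_mm):
    """V = L * di/dt across an unshielded link of the given length."""
    inductance = length_mm * WIRE_INDUCTANCE_PER_MM
    return inductance * PEAK_CURRENT / RISE_TIME

for mm in (3, 10, 30):
    print(f"{mm:3d} mm link: ~{link_drop(mm):.0f} V")
```

Even the 3 mm case drops several volts; the 3 cm header link lands in the tens of volts, consistent with the figures above.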
To put it another way: to have a valid USB signal traversing that cable while it's subject to 1kV EFT pulses, you need at least 60dB of attenuation between shield and signal! Some of that you might get for free -- you might save 3dB by geometry (the incident signal divides between shield, ground, and other cables), and another 3dB somewhere else (if the EFT is applied to a distant cable, the energy will be more rounded off by the time it comes around). A 3cm wire link for the shield might do 30, even 40dB. A ground trace might even do 50dB. Oh so close! But still no cigar. So it's a tough job, and demands best practices.
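The budget arithmetic here can be sketched in a few lines. The ~1V tolerable disturbance (giving the 60dB figure against 1kV) is my assumption for illustration:

```python
import math

def db(ratio):
    """Voltage ratio expressed in decibels."""
    return 20 * math.log10(ratio)

# Required attenuation: a 1 kV pulse vs. a ~1 V tolerable disturbance
# on the signal (assumed threshold for illustration).
required = db(1000 / 1)            # 60 dB

# Optimistic budget from the text: geometry split, pulse rounding,
# and a good shield link.
budget = 3 + 3 + 40                # dB -- still short of 60
print(f"required: {required:.0f} dB, available: {budget} dB")
```

Even granting 40dB to the shield link, the budget comes up short, which is the "close, but no cigar" point.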
The best way to do cap-grounded shielding, in my opinion, would be to build a local ground plane around the shield (using typical THT or SMT shielded connectors with pads electrically and mechanically mounting the shield itself), and couple the edges of that with chip caps, one on each side, and preferably one where the signal pair crosses the split as well. The caps therefore have no trace length (or an absolute minimum, only pad thermals and maybe a via or two), and using multiple in parallel ensures low inductance.
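As a rough illustration of why paralleled chip caps help: above self-resonance the cap's impedance is dominated by its loop inductance (ESL), and paralleling halves it. The 10 nF value and ~1 nH per-cap loop inductance here are assumed ballpark figures, not from any datasheet:

```python
import math

def cap_impedance(f, c, esl):
    """|Z| of a capacitor with series inductance (ESL); ESR neglected."""
    return abs(2 * math.pi * f * esl - 1 / (2 * math.pi * f * c))

f = 100e6  # Hz, a representative EFT/ESD spectral component

# One 10 nF chip cap with ~1 nH loop inductance (assumed), vs. two in
# parallel (capacitance doubles, inductance halves).
single = cap_impedance(f, 10e-9, 1e-9)
double = cap_impedance(f, 20e-9, 0.5e-9)
print(f"one cap @100 MHz: {single:.2f} ohm; two in parallel: {double:.2f} ohm")
```

With these values the caps are above self-resonance at 100 MHz, so it's the inductance that matters -- hence the emphasis on zero trace length and multiple caps.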
This kind of solid grounding pushes you into the 80dB+ range for shielding. It's good medicine, and works for anything sensitive in a noisy environment.
Needless to say, if you're putting in RF-grounding caps, a ferrite bead is superfluous. You can use a resistor if you like; 0-33 ohms is probably best (this will tend to terminate the shield's common mode and differential (i.e., versus power/signal lines) impedance working against the bypass caps).
The other thing about shields: they have impedance to the signals, whether you like it or not. In a typical USB cable, the four wires are simply wires inside a shield (the D+/D- usually being twisted, and +5/GND being twisted or not). Therefore, there is common mode impedance between the D-pair and power/ground, and to shield. In general, these will be in the 50-100 ohm range, hardly insignificant!
This is why there is another aspect as well (continuing from earlier): when an ESD strike occurs on the shield, it couples capacitively into the bundle of wires inside the cable. VBUS, GND, D+ and D- all see this as a common-mode spike. Sending these signals through a common-mode choke prevents the coupled energy from causing any damage in your system: you force the energy to shoot down the shield and keep it out of the system (power, ground and signals).
To do it right, you need a split system/shield, with proper shunting to set the DC point and short out RF energy, and you need to stop the common mode from being coupled in as well.
And so, if you're having to deal with solid kilovolts of spikes, your only recourse is to shunt that noise around your precious signals. You must keep a low impedance shield, to ground, so that the energy is bypassed around your circuit, along its ground.
Note when I say "shunted around the circuit": it can be done explicitly with an enclosure shield. This looks like a desktop computer, for example. The case is solid metal, and all the shielded connectors are bonded to it (with EMI fingers). All noise is shunted around this path, all noise currents due to self-capacitance, due to current riding along to other cables, whatever. Everything inside is oblivious to all this happening, because it's a Faraday cage. The continuous, low impedance shield is an absolute requirement to achieve this.
If you don't have a metallic enclosure, you're a bit more pressed for options. A circuit board with solid ground plane is as good a substitute as you can get. The shield must be quickly bonded to this ground. Once the noise is "on" the ground, it's also on all the signals -- this is good -- it acts as a Faraday cage as well, so that the entire circuit's potential floats on this level. All the circuit knows is the differences within, which because they are referenced to this (noisy, only in an absolute sense) ground, means nothing to it!
When you combine these ideas, you can answer the question: does it matter if you ground the shield, when it's already inside the box? Probably not -- but -- you can never be too safe, and doing so will only improve performance*.
(*There are no guarantees, when it comes to EMC. Obvious exceptions include badly made connectors that aren't, in fact, fully shielded, so endogenous currents inside the system cause ground loops within the cable -- not much, only a few volts, but at higher current than otherwise, maybe just enough to trash some logic thresholds. Maybe there's a poorly grounded power supply, injecting common mode switching noise into the motherboard and peripherals. Lots of possibilities -- but these, at least, are more of a secondary issue, and much more likely to be solvable with ferrite beads (to reduce RF ground loop currents), because the voltage is already small.)
Also, there's always the caveat: if you can tolerate high BER, you can do some truly nasty things. Like leave off filtering, shielding and maybe ESD altogether.
The last example I tested was a fairly standard USB host on a Linux board. I don't think the driver did any auto-retry whatsoever, it just dropped as soon as it found a malformed packet. Once disrupted, it would sometimes recover the connection automatically, always after a soft reset. I have no idea if that's something under user or kernel control, for such a platform.
If you're doing something as good as TCP/IP, with self-resetting or retrying interfaces, or if you simply don't care and it's more like UDP, you might be fine with anything but the most intense RF susceptibility tests.
Susceptibility consists of bombarding the cables (conducted) or system (radiated) with AM modulated CW, over a frequency sweep, inducing a known V on the cables / V/m in the air. So, if noise on the order of a few volts is invading inputs, it would tend to just completely stomp out continuous data flow, during the AM peaks or continuously.
Basic FCC Part 15 or IEC 61000-4-3 tests are around 3V/m, so if you figure a < 1m cable is acting as an antenna and receiving about 3V, it's not too painful to have that divide down accidentally by circuit layout or filtering, and still get a stable threshold.
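That "cable as antenna" figure is just the pessimistic rule of thumb V ≈ E × l (assume the cable couples the full field over its length), not real antenna theory:

```python
# Crude coupling estimate for an IEC 61000-4-3 style radiated test.
# Worst-case rule of thumb: a cable of length l in a field E picks up
# roughly V = E * l. Pessimistic, but useful for budgeting.

FIELD = 3.0        # V/m, basic commercial test level
CABLE_LENGTH = 1.0 # m

induced = FIELD * CABLE_LENGTH
print(f"~{induced:.0f} V on a {CABLE_LENGTH:.0f} m cable at {FIELD:.0f} V/m")

# Automotive/aero/military levels are an order of magnitude harsher:
harsh = 30.0 * CABLE_LENGTH
print(f"~{harsh:.0f} V at 30 V/m")
```

A few volts divided down by layout or filtering is survivable; thirty volts on the same cable is a different problem entirely, which is the point of the next paragraph.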
This is another reason why you can sometimes get away with nasty things. But apply the same rules to an automotive, aero or military application (with 30V/m or more), and you'll find your sage advice falls flat on its face.
Tim