Talking of protocol decoders: IMHO the I2C decoding is not implemented optimally, as it seems to be based on clock edges, while the protocol only requires that the data must not change while the clock is high. This means the ideal point for sampling the data is in the middle of the high period. Hardware implementations of I2C tend to change the data immediately after the clock goes low, and the HMO decoder doesn't like this: if the sample rate is not high enough to capture the clock going low and the data line changing level in two separate samples, it marks the bit as faulty. So you need a much higher sample rate to decode I2C with an HMO scope than you would normally need, e.g. >20MS/s to decode a 400kHz signal. My cheap ZeroPlus logic analyzer easily decodes the same signal at 1 or 2MS/s.
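To illustrate the difference, here's a minimal sketch in Python (purely hypothetical, nothing to do with the HMO firmware) of a decoder that samples SDA in the middle of each SCL high phase instead of at the edges:

```python
# Hypothetical sketch: mid-high-phase I2C bit sampling.
# 'scl' and 'sda' are equal-length lists of 0/1 samples.
def sample_bits(scl, sda):
    """Return one SDA bit per SCL high phase, sampled at its midpoint."""
    bits = []
    rise = None
    for i in range(1, len(scl)):
        if scl[i - 1] == 0 and scl[i] == 1:    # SCL rising edge
            rise = i
        elif scl[i - 1] == 1 and scl[i] == 0:  # SCL falling edge
            if rise is not None:
                mid = (rise + i) // 2          # midpoint of the high phase
                bits.append(sda[mid])
            rise = None
    return bits

# SDA changes on the very sample where SCL falls, exactly what hardware
# I2C does. An edge-based decoder at this sample rate would flag the bit;
# mid-phase sampling decodes it cleanly.
scl = [0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0]
sda = [1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(sample_bits(scl, sda))  # -> [1, 0, 1]
```

The point being: a decoder that samples mid-phase never even looks at the falling edge, so it doesn't care how quickly the data changes afterwards.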
Unfortunately, the R&D guy didn't seem to share my concerns about this. Or to be more exact: when I reported issues with decoding a perfectly valid I2C signal, he sent me an (outdated) I2C spec and claimed "my" I2C timing was too critical. However, the signal came from the hardware I2C peripheral of an NXP µC, and after all NXP is the former Philips semiconductor division, and Philips invented the I2C bus. Besides, the spec permits the data to change immediately after the falling clock edge (the minimum data hold time is specified as zero); it just expects the data to be sampled within the clock high phase, not at the falling edge.
Yet, as mentioned before, my reply to his mail was ignored, so it's hard to tell whether they're at least aware of the issue internally.