It is not the frequency that matters in digital electronics; the edge rate (rise/fall time) is the main factor determining how difficult things are.
Which boils down to a frequency issue again, as Mr. Fourier figured out in another context in the early 19th century. Sharp edges = lots of high-frequency components. Failure to properly transmit those high-frequency components = kiss your signal goodbye.
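To put a number on "lots of high-frequency components", a common rule of thumb (an approximation, not from the original post) says the significant spectral content of an edge extends to roughly 0.35 divided by the 10-90% rise time:

```python
# Rough spectral bandwidth implied by an edge, using the common
# BW ~= 0.35 / t_rise rule of thumb (for a 10-90% rise time).
def edge_bandwidth_hz(t_rise_s: float) -> float:
    return 0.35 / t_rise_s

# Example (hypothetical numbers): a leisurely pulse with 2 ns edges
# still contains significant content up to ~175 MHz.
print(f"{edge_bandwidth_hz(2e-9) / 1e6:.0f} MHz")
```

This is why a "slow" signal with fast edges can still misbehave like an RF one.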
The frequency-domain way of thinking gets difficult when debugging single events, or signals with duty cycles near zero or 100%.
I once debugged a system where single CPU write accesses sometimes produced two writes at the peripheral. This happened regardless of how many writes per time unit we performed. Note that this wasn't any kind of "high frequency" stuff; the /WR pulse width was 100 ns or so. After much head-scratching, it was determined that the reason for the write failure was that
the edges seen by the peripheral were non-monotonic: a transmission-line reflection spoiled the falling edge, so the receiving device sometimes saw two writes, one on the falling edge and one on the rising edge. I happen to have a scope screenshot from the event:
As one can see, the signal spends quite a lot of time in the undetermined region.
Thus, I think it can be concluded that if a single edge is transmitted properly, then from the signal integrity viewpoint it is irrelevant how many edges per time unit (the clock rate) one transmits. It is therefore often easier to think in terms of edge rates and relate those to transmission-line lengths (termination requirements). If the round-trip time is less than 1/10 of the rise/fall time (different limits exist), the transmission line can be treated as a lumped circuit.
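The 1/10-of-rise-time criterion translates directly into a maximum "safe" trace length. A minimal sketch, assuming roughly 6.7 ps/mm propagation delay (typical for outer-layer FR-4 microstrip; your stackup will differ):

```python
# Lumped-vs-transmission-line check: a trace can be treated as a
# lumped element if its round-trip propagation delay stays under
# ~1/10 of the edge rise/fall time (other limits, e.g. 1/6, exist).
T_PD_S_PER_MM = 6.7e-12  # assumed propagation delay, ~6.7 ps/mm (FR-4 microstrip)

def max_lumped_length_mm(t_rise_s: float, fraction: float = 0.1) -> float:
    # round-trip delay = 2 * length * t_pd  <  fraction * t_rise
    return fraction * t_rise_s / (2 * T_PD_S_PER_MM)

# Example: with 1 ns edges, anything much beyond ~7.5 mm deserves
# termination thinking.
print(f"{max_lumped_length_mm(1e-9):.1f} mm")
```

Note how the clock rate never enters the calculation, only the edge rate.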
And if one does not have transmission lines in the PCB, one should get them in the first place. In practice this means a contiguous ground plane under the signal traces, with a dielectric thickness similar to the trace width, so that the trace impedance drops to a reasonable level (something like 80 ohms or below).
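To see why trace-width-comparable dielectric thickness lands in that impedance range, here is a sketch using the classic IPC-2141 microstrip approximation (an approximation only, roughly valid for 0.1 < w/h < 2.0; the geometry numbers below are illustrative, not from the post):

```python
import math

# Rough microstrip impedance via the IPC-2141 closed-form approximation:
#   Z0 = 87 / sqrt(er + 1.41) * ln(5.98 * h / (0.8 * w + t))
# A sketch, not a field solver.
def microstrip_z0(h_mm: float, w_mm: float, t_mm: float, er: float) -> float:
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# Example geometry (assumed): 0.2 mm dielectric under a 0.3 mm trace,
# 35 um copper, FR-4 (er ~ 4.3) -- comfortably below 80 ohms.
print(f"{microstrip_z0(0.2, 0.3, 0.035, 4.3):.0f} ohms")
```

Thicker dielectric (or a missing plane) pushes the impedance up and the reflections with it.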
Regards,
Janne