If I understood the Apollo thing correctly, a simplified equivalent problem would be wiring a signal directly into an interrupt pin of a CPU without thinking about the maximum interrupt rate that can occur under normal or less-than-normal conditions.
It's good that the complete system was able to cope, but it came at the expense of other processes; this was a last-resort attempt to save the day, caused by a failure at a lower level, where the problem would have been much easier to deal with.
Really the right thing to do is to condition/process the signal at the hardware level: for example, use a timer peripheral to count the incoming pulses, something that can cope with whatever pulse rate shows up. If you don't have hardware for that and have to resort to a CPU interrupt, then the only sane thing you can do is to turn off the interrupt source for some time and re-enable it from a timer, which sets a maximum limit on the interrupt rate. But then you need something else to detect the "unexpected pulses outside the allowed interrupt window" error case.
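Something like this, as a rough sketch (the CMSIS NVIC_* calls are real, but the device header, the EXTI0/TIM2 IRQ names and the flag-clearing writes are placeholders you'd adapt to your own part):

[code]
#include <stdint.h>
#include <stdbool.h>
#include "stm32f3xx.h"          /* placeholder: whatever device header gives you NVIC_* and the IRQn enum */

#define PULSE_IRQn  EXTI0_IRQn  /* placeholder: the EXTI line your pulse input sits on */

static volatile uint32_t pulse_count;
static volatile bool     pulse_overrun;

/* Pulse input ISR: count the event, then mute the source so a runaway
   signal cannot eat the CPU. */
void EXTI0_IRQHandler(void)
{
    /* clear the EXTI pending flag here (device-specific register write) */
    pulse_count++;
    NVIC_DisableIRQ(PULSE_IRQn);           /* ignore further pulses for now */
}

/* Periodic timer ISR, e.g. 1 kHz: re-arm the pulse interrupt, which caps
   the pulse interrupt rate at the timer rate. A pulse arriving while muted
   leaves the NVIC pending bit set; that is the "unexpected pulses outside
   the allowed window" error case. */
void TIM2_IRQHandler(void)
{
    /* clear the timer update flag here (device-specific register write) */
    if (NVIC_GetPendingIRQ(PULSE_IRQn)) {
        pulse_overrun = true;              /* pulses came faster than allowed */
        NVIC_ClearPendingIRQ(PULSE_IRQn);
    }
    NVIC_EnableIRQ(PULSE_IRQn);
}
[/code]

The nice part is that the NVIC pending bit doubles as the detector for pulses that arrived while the source was muted, so nothing gets silently lost.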
In modern microcontrollers, the key is the good availability of peripherals. For example, the STM32F334 HRTIM has an asynchronous event input, which can be configured to drive the outputs asynchronously, bypassing the synchronization delays and jitter we have been talking about; exactly because these are significant in the most demanding DC/DC control applications.
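To illustrate (a sketch only; I'm writing the ST HAL names from memory, so the macro and struct field names should be checked against stm32f3xx_hal_hrtim.h, and the HRTIM/GPIO bring-up is omitted):

[code]
/* Sketch: route external event 1 in "fast" mode to the HRTIM outputs on an
   STM32F334, i.e. without resynchronizing it to the HRTIM clock. Names are
   from memory of the ST HAL; double-check them against the headers. */
HRTIM_EventCfgTypeDef ev = {0};
ev.Source      = HRTIM_EVENTSRC_1;              /* the pin mapped to EEV1 */
ev.Polarity    = HRTIM_EVENTPOLARITY_HIGH;
ev.Sensitivity = HRTIM_EVENTSENSITIVITY_LEVEL;  /* fast mode wants level sensitivity */
ev.Filter      = HRTIM_EVENTFILTER_NONE;        /* no digital filter... */
ev.FastMode    = HRTIM_EVENTFASTMODE_ENABLE;    /* ...and no resynchronization */
HAL_HRTIM_EventConfig(&hhrtim1, HRTIM_EVENT_1, &ev);
/* EEV1 is then selected as the set/reset source of the output in the
   HAL_HRTIM_WaveformOutputConfig() call, which I've left out here. */
[/code]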
Counting cycles of processing is a red herring. Most often, instruction timing (or interrupt latency) is not the source of unexpected jitter in microcontroller systems. NorthGuy completely missed my point by challenging the claim about the 12-cycle Cortex-M7 interrupt latency. I was purposely not talking about the complete system, because the xCORE sales guy* isn't doing that either. That interrupt latency is what gets compared against counting instruction cycles on a "simple" core running a blocking wait-for-event instruction, but it's totally the wrong metric. Neither the Cortex-M7 nor the xCORE is actually completely predictable and jitter-free, because both need to interface with the external world, and that interface is almost always asynchronous, while the CPU is synchronous (to its own clock).
And this is the point, if someone still missed it: a Cortex-M7 predicting a branch and "unexpectedly" saving 5 nanoseconds is in exactly the same order of magnitude as the synchronization jitter! The claims about interrupt jitter being in the range of thousands of nanoseconds due to caches, backed up by measurements on application processors, do not apply to microcontrollers at all: it's a classic strawman argument, the purpose of which is to confuse the reader.
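To put a number on it (using a 216 MHz Cortex-M7 purely as an example): one core clock period is roughly 4.6 ns, and an asynchronous input edge has to be resynchronized to that clock before the core or a synchronous peripheral can act on it, so the edge lands anywhere within one clock period, or two with a double-rank synchronizer. That is some 5 to 10 ns of jitter on the input alone, before a single instruction has run, which is exactly where the "unexpected" 5 ns from a predicted branch disappears.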
Repeat after me until understood: caches do not apply to microcontrollers. Caches do not apply to microcontrollers. Even if caches are available, using them is not mandatory. Microcontrollers come with fast memory. Microcontrollers let you run ISRs out of that fast memory. Cortex-A CPUs are not microcontrollers. Look at the name of this subforum. Look at what the "Subject:" line says. Got it?
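To make that last point concrete, this is what it looks like with GCC on an STM32F7/H7-class part; the section names are placeholders that have to match your linker script, and the startup code has to copy the function into ITCM before it is used:

[code]
/* Sketch: pin the ISR into zero-wait-state ITCM and its data into DTCM so
   the cache never enters the latency budget. ".itcm_text" / ".dtcm_data"
   are placeholder section names; use whatever your linker script defines. */
__attribute__((section(".dtcm_data"))) static volatile uint32_t edge_count;

__attribute__((section(".itcm_text"), used))
void EXTI0_IRQHandler(void)
{
    /* clear the pending flag (device-specific), then do the time-critical work */
    edge_count++;
}
[/code]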
*) yes, someone PM'd me some more references, which made me even more convinced. I might still be wrong, though; it's entirely possible to look like a duck, quack like a duck, and still not be a duck.