Author Topic: Best MCU for the lowest input capture interrupt latency  (Read 19662 times)


Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3200
  • Country: ca
Re: Best MCU for the lowest input capture interrupt latency
« Reply #75 on: April 06, 2022, 05:05:05 pm »
Good thing about theory is that it matches with reality. Otherwise, theory is faulty.

ARM Cortex-M7 interrupt latency is guaranteed by ARM, by design. There is no need for me to test it.

Well, there's some time needed to sync and route the external signal to the ARM core. Then there will be some slowing down due to execution from non-cached memory. And there are further delays in getting the signal out of the CPU and syncing it to the output flops/latches. Because of these considerations, the reality also depends on the vendor, not only on ARM. Your theory doesn't take any of these factors into account, so it is clear that the theory is faulty.
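To put rough numbers on that (an illustrative budget only; every cycle count below is an assumption for the sake of the example, not a vendor figure):

Code: [Select]
/* Illustrative pin-to-pin budget at 100 MHz (10 ns per cycle). */
#define NS_PER_CYCLE 10.0

double input_sync  =  2 * NS_PER_CYCLE;  /* input synchronizer flip-flops      */
double irq_entry   = 12 * NS_PER_CYCLE;  /* architectural exception entry      */
double slow_fetch  =  5 * NS_PER_CYCLE;  /* wait states / non-cached ISR fetch */
double output_sync =  2 * NS_PER_CYCLE;  /* output register and pad delay      */
/* total = 210 ns, and everything except the 12-cycle entry is
   vendor-dependent, which is exactly the point above           */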

Of course, if you're not interested in reality, there's no need to test.

 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3200
  • Country: ca
Re: Best MCU for the lowest input capture interrupt latency
« Reply #76 on: April 06, 2022, 05:15:31 pm »
Quote from: jemangedeslolos
I have the exact same result with INT1 ( external interrupt 1 )  instead of IC1 ( input capture 1 ).
With OC1CON2.OCTRIG = 0, the results are as expected but I can get it working with OC1CON2.OCTRIG = 1.
OC1 interrupt fires only one time and never fires again.

What happens when you set OCTRIG = 1 without changing anything else?

What do you do in your OC1 ISR?

The interesting thing is that the latency is lower with external interrupt. I measure only around 50ns between input and output rising edges instead of 80ns with input capture.

IC needs to capture the timer and store the result in the IC buffer; only after the result is saved does it signal the interrupt. INT generates the interrupt request right away. Some chips have hard-wired dedicated INT pins (as opposed to PPS'ed ones), which are faster still because they remove the PPS delay.
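For illustration, a CCS-style sketch of both options (the pin choice and the exact #pin_select support are assumptions; check your device's datasheet and the compiler manual):

Code: [Select]
// PPS-routed external interrupt: the edge goes through the remapper mux.
#pin_select INT1=PIN_B5        // assumed pin

// INT0, by contrast, is hard-wired on most PIC24/dsPIC parts, so it
// avoids the PPS mux delay entirely; only the edge needs configuring:
void init_int0(void)
{
   ext_int_edge(0, L_TO_H);    // INT0 fires on the rising edge
   clear_interrupt(INT_EXT);
   enable_interrupts(INT_EXT);
   enable_interrupts(GLOBAL);
}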
 

Offline Sal Ammoniac

  • Super Contributor
  • ***
  • Posts: 1741
  • Country: us
Re: Best MCU for the lowest input capture interrupt latency
« Reply #77 on: April 06, 2022, 08:40:57 pm »
Another historical comparison: the Apollo guidance computer used on the moon landing missions. This was certainly an embedded application, and it controlled a spacecraft hurtling towards the moon at thousands of MPH in real time with little input from the astronauts. The AGC ran at perhaps 1 MHz and had 2K words of RAM and 36K of fixed storage, and most of the guidance and autopilot code was written in an interpreted language that ran even slower than native code. Cycle times were measured in milliseconds, not nanoseconds, yet the whole thing worked fine for its intended purpose: even with a 100 millisecond control loop it was able to maintain control of the vehicle. Even if they had had a modern MCU back then, they probably wouldn't have run the control loop any faster (because it wasn't necessary).

Er, not quite.

You should look up "program alarm 1202".

Good defensive system design saved the day; the 25 year old controllers made the right call to ignore that alarm.

I'm not sure what you're implying by your comment "er, not quite"... Not quite what? The fact that the computer gracefully handled the problem? That has nothing to do with the fact that the computer was primitive, slow, and lacking in memory by modern standards, yet still got the job done.

I don't need to look up program alarm 1202. I'm very familiar with the architecture of the AGC hardware and software and know exactly what it means.

Specifically, for those interested: a 1202 program alarm occurred because the software used "core sets", 12-word chunks of erasable memory used by the executive to manage jobs (analogous to a task control block in a modern RTOS). The LM AGC code had only eight core sets available, and if the executive tried to start a new job when no core sets were free, it generated a 1202 program alarm. The similar 1201 program alarm was caused by no VAC (vector accumulator) areas being available when creating an interpretive job. Both the 1201 and 1202 alarms were due to jobs not being able to complete because something was "stealing" time.

This "time stealing" turned out to be coming from the rendezvous radar. The rendezvous radar is not using during a landing, but Aldrin turned it on anyway just in case they had to abort and navigate back to the CSM in orbit. The rendezvous radar had two axes of movement and each axis had a resolver feeding a CDU (coupling data unit--essentially an analog-to-digital converter). The CDU read the analog signals from the resolvers and converted them into digital pulses it fed to the AGC. When a pulse occurred, the AGC hardware incremented or decremented a counter by stealing a cycle from the CPU. The problem was an oversight--the rendezvous radar used a different 800 Hz reference oscillator than the computer, and the two where out of phase, and this caused a pulse storm of spurious pulses to the AGC. Each spurious pulse stole a CPU memory cycle of ~12 microseconds and this placed about a 15% extra load on the CPU, which caused the overload, and the program alarms.

As you mentioned, the designers had put (at the insistence of NASA) restart code into the software that would cause a soft reset and shed any low priority jobs (like updating the DSKY, which is why it blanked out for ten seconds during all this) while retaining all the important stuff, like the digital autopilot. Restarts happened so quickly that Armstrong didn't even notice any change in the handling of the LM (because all the important stuff was still running).

BTW, if anyone is interested in reading the AGC source code, you can find it here: https://github.com/chrislgarry/Apollo-11

"Luminary" is the code running on the Apollo 11 LM computer.
« Last Edit: April 06, 2022, 08:46:33 pm by Sal Ammoniac »
"That's not even wrong" -- Wolfgang Pauli
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 20000
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Best MCU for the lowest input capture interrupt latency
« Reply #78 on: April 06, 2022, 09:21:10 pm »
Running out of available processing power and having to abort processing indicates that it didn't "work fine". The pilots and ground control were distracted at a critical time. Fortunately good defensive system design saved the day.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Sal Ammoniac

  • Super Contributor
  • ***
  • Posts: 1741
  • Country: us
Re: Best MCU for the lowest input capture interrupt latency
« Reply #79 on: April 06, 2022, 09:39:40 pm »
Running out of available processing power and having to abort processing indicates that it didn't "work fine". The pilots and ground control were distracted at a critical time. Fortunately good defensive system design saved the day.

It would be analogous to a modern embedded system connected to an external peripheral with an interrupt input to the CPU that kept firing continuously and preventing the CPU from doing anything other than servicing the interrupt. You'd have to say that system wasn't "working fine" either.

You'd hope issues like this would have been caught in ground testing, but that didn't happen.
"That's not even wrong" -- Wolfgang Pauli
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 20000
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Best MCU for the lowest input capture interrupt latency
« Reply #80 on: April 06, 2022, 10:14:15 pm »
Analogies are usually dangerous, since they encourage overthinking the analogy, not the real issue.

It didn't work fine; it coped - just.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Sal Ammoniac

  • Super Contributor
  • ***
  • Posts: 1741
  • Country: us
Re: Best MCU for the lowest input capture interrupt latency
« Reply #81 on: April 06, 2022, 11:07:48 pm »
It didn't work fine; it coped - just.

Ah, so any embedded systems that use error recovery code to recover from problems don't work fine, they "just cope". Got it.
"That's not even wrong" -- Wolfgang Pauli
 
The following users thanked this post: uer166

Offline PCB.Wiz

  • Super Contributor
  • ***
  • Posts: 1670
  • Country: au
Re: Best MCU for the lowest input capture interrupt latency
« Reply #82 on: April 07, 2022, 03:01:06 am »
Hello again,

I have the exact same result with INT1 ( external interrupt 1 )  instead of IC1 ( input capture 1 ).
With OC1CON2.OCTRIG = 0, the results are as expected but I can get it working with OC1CON2.OCTRIG = 1.
OC1 interrupt fires only one time and never fires again.

The interesting thing is that the latency is lower with external interrupt. I measure only around 50ns between input and output rising edges instead of 80ns with input capture.
Did you check for edge filter options in input capture?
- but you are right, it would be rare for a SW path to be faster than a configured HW path.

Also keep in mind "latency is lower with external interrupt" is highly dependent on what the MCU is doing at the time of interrupt.

An idling MCU would be expected to respond fastest, whilst an MCU executing one of the slower opcodes has to complete it first, and an MCU already inside another interrupt has to complete that one, unless your event INT has the highest priority.
(in that case, you have added jitter and latency to the over-ruled interrupt)
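One way to see that spread on a scope (a CCS-style sketch in the same vein as the code later in this thread; OUT1 is the output pin used there) is to echo the event edge from the ISR:

Code: [Select]
#INT_EXT1
void int1_isr(void)
{
   output_high(OUT1);   // scope: input edge to this edge = latency
   output_low(OUT1);    // its spread over many events = jitter
}
// With the main loop idling the delay is near-constant; add slow
// opcodes or a competing ISR and watch the spread grow.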




 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3200
  • Country: ca
Re: Best MCU for the lowest input capture interrupt latency
« Reply #83 on: April 07, 2022, 03:36:46 am »
Also keep in mind "latency is lower with external interrupt" is highly dependent on what the MCU is doing at the time of interrupt.

This is not a real interrupt - just a signal which triggers the other peripheral - this happens independently of the MCU.

However, if it were an interrupt, with this MCU the latency would be the same every time. Actually, the MCU has a setting for that: you can choose constant latency (the default), or you can choose faster, but variable, latency.
 

Offline jemangedeslolos (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 386
  • Country: fr
Re: Best MCU for the lowest input capture interrupt latency
« Reply #84 on: April 07, 2022, 07:26:01 am »
Hello again,

I have the exact same result with INT1 ( external interrupt 1 )  instead of IC1 ( input capture 1 ).
With OC1CON2.OCTRIG = 0, the results are as expected but I can get it working with OC1CON2.OCTRIG = 1.
OC1 interrupt fires only one time and never fires again.

The interesting thing is that the latency is lower with external interrupt. I measure only around 50ns between input and output rising edges instead of 80ns with input capture.

What happens when you set OCTRIG = 1 without changing anything else?

What do you do in your OC1 ISR?

The interesting thing is that the latency is lower with external interrupt. I measure only around 50ns between input and output rising edges instead of 80ns with input capture.

IC needs to capture the timer and store the result in the IC buffer. Only after the result is saved, it signals the interrupt. INT generates the interrupt request right away. Some chips have hard-wired dedicated INT pins (as opposed to PPS'ed ones) which are even faster because they remove the PPS delay.

Oh sorry I made a typo, I meant "I can't get it working with OC1CON2.OCTRIG = 1"
So with OC1CON2.OCTRIG = 1 and OC1CON1.OCM = 0b101, I have continuous 20 us pulses on OC1.
Don't look at the noise and crosstalk on channel 1; I'm using a Digilent Analog Discovery 2 without shielded probes:



And with OC1CON2.OCTRIG = 1 and OC1CON1.OCM = 0b010, OC1 interrupt fires only one time and never fires again.
The TRIGMODE setting changes nothing to the story, and I'm doing almost nothing inside the OC1 interrupt handler:

Code: [Select]
#INT_OC1
void oc1_isr(void)
{
   //delay_cycles(16);
   
   //output_high(OUT1);
   //delay_us(1);
   //output_low(OUT1);
   
   //OC1CON2.TRIGSTAT = 0;     // re-arm attempts, tested with no effect
   //OC1TMR           = 0;
   //OC1CON1.OCM      = 0b010;
   
   OC1IT_Flag = TRUE;          // just flag the event for the main loop
   
   IFS0.OC1IF = 0;             // clear the OC1 interrupt flag
}

I left in a few lines of commented-out code that I tested, but none of it changes anything either.
 

Offline jemangedeslolos (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 386
  • Country: fr
Re: Best MCU for the lowest input capture interrupt latency
« Reply #85 on: April 07, 2022, 07:35:49 am »
And to be very clear, without changing anything else, here is a screenshot with OC1CON2.OCTRIG = 1 and OC1CON1.OCM = 0b101.
The input pulse on IC1 is 500 Hz, 10% duty:



And with input pulse > OC1 period (2 kHz, 10% duty):

 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 20000
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Best MCU for the lowest input capture interrupt latency
« Reply #86 on: April 07, 2022, 07:39:06 am »
It didn't work fine; it coped - just.

Ah, so any embedded systems that use error recovery code to recover from problems don't work fine, they "just cope". Got it.

No, any embedded system that repeatedly invokes error recovery - especially in a critical situation - isn't working fine. If it is a timing problem then it is just coping.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21972
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Best MCU for the lowest input capture interrupt latency
« Reply #87 on: April 07, 2022, 12:56:25 pm »
Ah, so any embedded systems that use error recovery code to recover from problems don't work fine, they "just cope". Got it.

No, any embedded system that repeatedly invokes error recovery - especially in a critical situation - isn't working fine. If it is a timing problem then it is just coping.

Wow, that's crazy!  TIL literally every WiFi radio ever -- really, any networking device at all, CD player, hard drive, etc. literally can't work fine, in addition to "just coping"!

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 20000
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Best MCU for the lowest input capture interrupt latency
« Reply #88 on: April 07, 2022, 01:17:11 pm »
Ah, so any embedded systems that use error recovery code to recover from problems don't work fine, they "just cope". Got it.

No, any embedded system that repeatedly invokes error recovery - especially in a critical situation - isn't working fine. If it is a timing problem then it is just coping.

Wow, that's crazy!  TIL literally every WiFi radio ever -- really, any networking device at all, CD player, hard drive, etc. literally can't work fine, in addition to "just coping"!

I should have written "internal fault recovery", not "error recovery". Coping with external packet loss and Reed-Solomon codes is normal operation.

Intermittently telling the user "failed to do what you requested" is not normal operation.
« Last Edit: April 07, 2022, 01:18:56 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Sal Ammoniac

  • Super Contributor
  • ***
  • Posts: 1741
  • Country: us
Re: Best MCU for the lowest input capture interrupt latency
« Reply #89 on: April 07, 2022, 03:39:14 pm »
It didn't work fine; it coped - just.

Ah, so any embedded systems that use error recovery code to recover from problems don't work fine, they "just cope". Got it.

No, any embedded system that repeatedly invokes error recovery - especially in a critical situation - isn't working fine. If it is a timing problem then it is just coping.

Semantics. The software was working as designed; the hardware wasn't.
"That's not even wrong" -- Wolfgang Pauli
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 20000
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Best MCU for the lowest input capture interrupt latency
« Reply #90 on: April 07, 2022, 04:33:47 pm »
It didn't work fine; it coped - just.

Ah, so any embedded systems that use error recovery code to recover from problems don't work fine, they "just cope". Got it.

No, any embedded system that repeatedly invokes error recovery - especially in a critical situation - isn't working fine. If it is a timing problem then it is just coping.

Semantics. The software was working as designed; the hardware wasn't.

So you believe it was designed to run out of processing power at a critical point in the flight, alarm/distract the pilots, confuse CAPCOM, and leave it to a 25yo on the ground to say "ignore the alarm"?

I think not.

AIUI the hardware (both computer and external) was not faulty and was working as designed.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3200
  • Country: ca
Re: Best MCU for the lowest input capture interrupt latency
« Reply #91 on: April 07, 2022, 05:28:48 pm »
And to be very clear, without changing anything else, here is a screenshot with OC1CON2.OCTRIG = 1 and OC1CON1.OCM  = 0b101

I assume these are with OCTRIG = 0 (that's what it seems from the pictures), otherwise you would get the 20 us pulses as in the screenshot from the previous post.

On the 20 us pulses - looks like the module gets triggered immediately after the pulse.

This means that something must drive TRIGSTAT low at the end of the pulse. This is either because you set TRIGMODE = 1 or something else in your code. This is how it should be.

But the OC must not re-trigger immediately; it must wait for the next trigger. I cannot tell what re-triggers it, but you need to find out. I have two ideas:

1) This may be somehow related to the trigger source, e.g. changing from IC to INT may somehow change this.

2) The falling edge of the OC output couples onto the input line, producing a high enough voltage level to cause a new IC trigger.

I don't know why you get only one OC interrupt. However, if your chip has TRIGMODE you don't need the OC interrupt anyway.
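Roughly like this (a sketch only, reusing the register names from the posts above; the SYNCSEL code for IC1 and the one-shot OCM value are assumptions to verify against the OC chapter of your datasheet):

Code: [Select]
// One hardware-timed pulse per IC1 event, no OC interrupt needed:
OC1R             = 1;       // pulse starts when OC1TMR == OC1R
OC1RS            = 1400;    // ...and ends at OC1RS (20 us at 70 MHz, assumed)
OC1CON2.SYNCSEL  = 0b10000; // trigger source = IC1 (assumed selection code)
OC1CON2.OCTRIG   = 1;       // SYNCSEL source triggers instead of syncing
OC1CON2.TRIGMODE = 1;       // hardware clears TRIGSTAT, re-arming the module
OC1CON1.OCM      = 0b100;   // double-compare one-shot pulse mode (assumed)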
 

Offline jemangedeslolos (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 386
  • Country: fr
Re: Best MCU for the lowest input capture interrupt latency
« Reply #92 on: April 07, 2022, 07:04:19 pm »
And to be very clear, without changing anything else, here is a screenshot with OC1CON2.OCTRIG = 1 and OC1CON1.OCM  = 0b101

I assume these are with OCTRIG = 0 (that's what it seems from the pictures), otherwise you would get the 20 us pulses as in the screenshot from the previous post.

On the 20 us pulses - looks like the module gets triggered immediately after the pulse.

This means that something must drive TRIGSTAT low at the end of the pulse. This is either because you set TRIGMODE = 1 or something else in your code. This is how it should be.

But the OC must not re-trigger immediately; it must wait for the next trigger. I cannot tell what re-triggers it, but you need to find out. I have two ideas:

1) This may be somehow related to the trigger source, e.g. changing from IC to INT may somehow change this.

2) The falling edge of the OC output couples onto the input line, producing a high enough voltage level to cause a new IC trigger.

I don't know why you get only one OC interrupt. However, if your chip has TRIGMODE you don't need the OC interrupt anyway.

grrrrr sorry, typo again  :palm:
You are right, the last screenshots are with OCTRIG = 0.

Thank you very much for your help and your ideas.
I will investigate tomorrow with my real oscilloscope.
I'm using a homemade dev board that I've had for several years. I assumed everything was fine on the hardware side because I've never had a problem with it.
I attributed the noise to the Analog Discovery's unshielded wires.

I will keep you posted  ;)
 

Offline Sal Ammoniac

  • Super Contributor
  • ***
  • Posts: 1741
  • Country: us
Re: Best MCU for the lowest input capture interrupt latency
« Reply #93 on: April 07, 2022, 08:27:52 pm »
It didn't work fine; it coped - just.

Ah, so any embedded systems that use error recovery code to recover from problems don't work fine, they "just cope". Got it.

No, any embedded system that repeatedly invokes error recovery - especially in a critical situation - isn't working fine. If it is a timing problem then it is just coping.

Semantics. The software was working as designed; the hardware wasn't.

So you believe it was designed to run out of processing power at a critical point in the flight, alarm/distract the pilots, confuse CAPCOM, and leave it to a 25yo on the ground to say "ignore the alarm"?

Of course it wasn't. The design goal was to use a maximum of 85% of the available CPU, and they did meet that goal. A hardware design issue outside the computer, one that wasn't caught in testing, caused counter events that pushed the CPU usage to 100%. How should this have been handled by the software design team? By allowing more than the 15% margin they did? That wasn't feasible considering how resource-constrained the design was. The whole thing, which had never been done before, was a team effort, and it resulted in successful landings on every attempt, despite issues like the program alarms on Apollo 11 and the abort discrete issue on Apollo 14. The teams found effective workarounds (ignore the alarms on 11, and fool the computer into thinking it was already in abort mode on 14) under great pressure and time constraints.

But I'm sure a British moon mission will do much better and have a much better designed and tested computer system that will do more than just cope... On the other hand, considering my experience with the electrical systems on British cars, the rocket probably won't get off the ground. :-DD
"That's not even wrong" -- Wolfgang Pauli
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 20000
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Best MCU for the lowest input capture interrupt latency
« Reply #94 on: April 07, 2022, 09:10:48 pm »
Of course it wasn't. The design goal was to use a maximum of 85% of the available CPU, and they did meet that goal. A hardware design issue outside the computer, one that wasn't caught in testing, caused counter events that pushed the CPU usage to 100%. How should this have been handled by the software design team? By allowing more than the 15% margin they did? That wasn't feasible considering how resource-constrained the design was. The whole thing, which had never been done before, was a team effort, and it resulted in successful landings on every attempt, despite issues like the program alarms on Apollo 11 and the abort discrete issue on Apollo 14. The teams found effective workarounds (ignore the alarms on 11, and fool the computer into thinking it was already in abort mode on 14) under great pressure and time constraints.

But I'm sure a British moon mission will do much better and have a much better designed and tested computer system that will do more than just cope... On the other hand, considering my experience with the electrical systems on British cars, the rocket probably won't get off the ground. :-DD

That last comment is revealing.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline SpacedCowboy

  • Frequent Contributor
  • **
  • Posts: 292
  • Country: gb
  • Aging physicist
Re: Best MCU for the lowest input capture interrupt latency
« Reply #95 on: April 08, 2022, 01:42:30 am »
But I'm sure a British moon mission will do much better and have a much better designed and tested computer system that will do more than just cope... On the other hand, considering my experience with the electrical systems on British cars, the rocket probably won't get off the ground. :-DD

Wow. I can only assume some Brit pissed in your cornflakes this morning. Just ... wow.
 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1659
  • Country: nl
Re: Best MCU for the lowest input capture interrupt latency
« Reply #96 on: April 08, 2022, 07:12:45 am »
I should have written "internal fault recovery", not "error recovery". Coping with external packet loss and Reed-Solomon codes is normal operation.

Intermittently telling the user "failed to do what you requested" is not normal operation.

Aren't those called design limits? A packet error rate can be acceptable, or devastating above a certain threshold. A bit error rate likewise: at some point Reed-Solomon will fail. Not every real-time system is the same. Some are hard real-time: they can't miss any event or any deadline. Others have softer requirements (like video/audio playback with the infrequent hiccup).

There's plenty of research going on into systems that can operate with intermittent behaviour, such as intermittent, stochastic or approximate computing. It's a similar story to the worst-case execution times again: setting a design limit and designing with tighter system tolerances, instead of going overkill. As we've discussed, the times when CPU cycles were easily countable and 100% deterministic are pretty much gone.

Now personally I'm not too big a fan of introducing a large stochastic property into computational efforts. It's harder to grasp and debug, but I also think it removes the property that computers are good at: consistent computational power, or 'attention' (which has led to super-reliable infrastructure such as electrical grids and the internet). Consistent attention: something opposed to us humans, who can get grumpy if we haven't had our coffee/tea, or move discussions to a personal level just so we can evaluate hasDiscussionBeenWon() to True.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 20000
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Best MCU for the lowest input capture interrupt latency
« Reply #97 on: April 08, 2022, 08:35:49 am »
Snipped sensible points to concentrate on...
 
Now personally I'm not too big a fan of introducing a large stochastic property into computational efforts. It's harder to grasp and debug, but I also think it removes the property that computers are good at: consistent computational power, or 'attention' (which has led to super-reliable infrastructure such as electrical grids and the internet). Consistent attention: something opposed to us humans, who can get grumpy if we haven't had our coffee/tea, or move discussions to a personal level just so we can evaluate hasDiscussionBeenWon() to True.

I can tolerate that in many cases.

I am less tolerant of the so-called machine learning techniques, which - by design - don't have a known envelope. They are TDD taken to extremes, and presume "you can test quality into a product" :(
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8304
  • Country: fi
Re: Best MCU for the lowest input capture interrupt latency
« Reply #98 on: April 08, 2022, 08:45:10 am »
If I understood the Apollo thing correctly, a simplified equivalent problem would be wiring a signal directly into an interrupt pin of a CPU without thinking about the maximum interrupt rate that can occur under normal or less-than-normal conditions.

It's good the complete system was able to cope, but it came at the expense of other processes; this was a last-resort attempt to save the day, caused by a failure at a lower level, where the problem would have been much easier to deal with.

Really the right thing to do is to condition/process the signal at the hardware level: for example, use a timer peripheral to count the pulses; something which can deal with whatever pulse rate arrives. Failing hardware for that and having to resort to a CPU interrupt, the only sane thing you can do is turn off the interrupt source for some time and re-enable it with a timer, setting a maximum interrupt rate, as sketched below. But then you need something else to detect the "unexpected pulses inside the blanking window" error case.
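A minimal sketch of that blanking scheme in plain C (disable_ext_irq(), enable_ext_irq() and start_oneshot_timer_us() are hypothetical HAL names, not a real API):

Code: [Select]
#include <stdint.h>

volatile uint32_t pulse_count = 0;

void ext_irq_handler(void)        // one input pulse = one interrupt
{
    pulse_count++;
    disable_ext_irq();            // gate the source so a storm can't starve the CPU
    start_oneshot_timer_us(100);  // blanking window => max 10 kHz interrupt rate
}

void timer_irq_handler(void)      // window elapsed
{
    enable_ext_irq();             // pulses inside the window were lost, so a
}                                 // hardware counter is still needed to catch them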

In modern microcontrollers, the key is the good availability of peripherals. For example, the STM32F334 HRTIM has one asynchronous input, which can be configured to drive outputs asynchronously, bypassing the synchronization delays and jitter we have been talking about; it exists exactly because these are significant in the most demanding DC/DC control applications.

Counting cycles of processing is a red herring. Most often the instruction timing (or interrupt latency) is not the source of unexpected jitter in microcontroller systems. NorthGuy completely missed my point by challenging the claim about the 12-cycle Cortex-M7 interrupt latency. I was purposely not talking about the complete system, because the xCORE sales guy* isn't doing that either. That interrupt latency is what is being compared against counting instruction cycles on a "simple" core running a blocking wait-for-event instruction. But it's totally the wrong metric. Neither the Cortex-M7 nor the xCORE is completely predictable and jitter-free, because they need to interface with the external world, and this interface is almost always asynchronous while the CPU is synchronous (to its own clock).

And this is the point, if someone still missed it: a Cortex-M7 predicting a branch and "unexpectedly" saving 5 nanoseconds is the same order of magnitude as the synchronization jitter! The claims about interrupt jitter being in the range of thousands of ns due to caches, backed up by measurements of application processors, do not apply to microcontrollers at all: it's a classic strawman argument whose purpose is to confuse the reader.

Repeat after me until understood: caches do not apply to microcontrollers. Caches do not apply to microcontrollers. Even if they have caches available. Usage of caches is not mandatory. Microcontrollers come with fast memory. Microcontrollers allow running ISRs out of fast memory. Cortex-A CPUs are not microcontrollers. Look at what the name of this subforum is. Look what the "Subject:" line says. Got it?
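To make the "ISRs out of fast memory" point concrete: on an STM32F7/H7-class part, for example, an ISR can be pinned into zero-wait-state ITCM RAM with a GCC section attribute (the section name and the startup code that copies it into ITCM depend on your linker script; both are assumptions here):

Code: [Select]
// Fetches from ITCM are deterministic regardless of flash wait states
// or cache hit/miss, which removes that source of jitter.
__attribute__((section(".itcm_code"), used))
void TIM2_IRQHandler(void)     // standard CMSIS vector name on STM32
{
    // read the capture register, toggle the output, clear the flag...
}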

*) yes, someone PM'd me some more references, which made me even more convinced. I might still be wrong, though; it's entirely possible to look like a duck, quack like a duck, and still not be a duck.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 27355
  • Country: nl
    • NCT Developments
Re: Best MCU for the lowest input capture interrupt latency
« Reply #99 on: April 08, 2022, 09:43:28 am »
And this is the point if someone still missed it: a Cortex-M7 predicting a branch and "unexpectedly" saving 5 nanoseconds is totally the same order of magnitude as synchronization jitter is! The claims about interrupt jitter being in range of thousands of ns due to caches, backed up by measurements of application processors, does not apply to microcontrollers at all: it's a classic strawman argument, purpose of which is to confuse the reader.

Repeat after me until understood: caches do not apply to microcontrollers. Caches do not apply to microcontrollers. Even if they have caches available. Usage of caches is not mandatory. Microcontrollers come with fast memory. Microcontrollers allow running ISRs out of fast memory. Cortex-A CPUs are not microcontrollers.
Actually, those application processors (using a Cortex-A series as the main CPU) are often paired with a Cortex-M series subsystem, where the latter can do the hard real-time stuff.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Siwastaja

