Quote: "PIC16 doesn't have nested interrupts. Theoretically, you can do it in software, but it is so inefficient that it certainly isn't worth it."

Quote: "If some of your ISR are very long, ..."
What the hell is "programming ecosystem"?
Context switches with hard floating point can get pretty expensive because the floating point registers have to be saved/restored too.
The Cortex-M4 and M7 have a feature called lazy stacking. If the outgoing task didn't use the FPU (as indicated by a bit in the link register), the context switch code doesn't need to save the floating point registers.
I think it still adjusts the stack pointer as if they were saved, right?
Tggzzz. Let me guess. Are you by any chance a senior level engineer who manages a bunch of immigrant programmers who write all the code, and you spend 90% of your time learning ways to make them feel inferior and otherwise boosting your perceived value to the guy writing the checks?
There are many instances where you might want to do things in the ISR. For example, one of the main purposes in one of my projects is to get two ADC reads from a single, randomly triggered signal, and they must happen within a small microsecond window. In this case, it is easier to put the ADC acquisition in the ISR, and doing so has no significant detriment to the rest of the code. Yes, I could have done the ADC acquisition delay as a timer interrupt, but that was an unnecessary complication. I could even have dedicated a micro just to take these reads and have the master request the data at its leisure. But it was not necessary.
As I said, this is an unusual situation where you might want to do this. If all your ISRs are super short, then priority doesn't matter anyway, does it? They all get done in plenty of time, since they only take a few instruction cycles to complete.
If you want to treat your coders as disposable, then yes. It is better for them to follow your mandate to the letter, despite all the other problems and inefficiencies it may cause. You can dictate what hardware and IDE they must use and exactly how to write the code. That way you can fire one, plug in another engineer, and continue with your bug-ridden, bloated, over-budget project. (Sometimes this ends up being the best or only solution, for complexity and/or security reasons, but as engineers we don't have to LIKE it; and I bet a heck of a lot of the visitors/participants on this forum do one-man projects as the norm.) But if you have a coder you can depend on to take the project from start to finish, including support and maintenance, then a strict mandate with no room for change may not be the best way to go. Believe it or not, he may have a better way to get it done. Sometimes exceptions to the rule have more pros than cons.
Well, that isn't a long time, and the ADC readings are the necessary context, so I don't see any relevance to what I wrote.
For the avoidance of doubt, the context you decided to snip was that you wrote "If some of your ISR are very long,...". My statements were in that context, and they stand.
It wouldn't matter how long/short it was if the system jammed due to priority inversion - in which case priority would definitely matter.
That's an irrelevant strawman rant.
Quote: "PIC16 doesn't have nested interrupts. Theoretically, you can do it in software, but it is so inefficient that it certainly isn't worth it."

Quote: "If some of your ISR are very long, ..."
If your ISRs are very long, then you are "Doing Something Very Wrong"TM.
An ISR should
- determine the source of the interrupt
- gather necessary context
- create an Event containing the necessary context
- submit that Event for consumption and processing by a background task/thread/process, probably via a FIFO or mailbox
- and exit the ISR A.S.A.P.
They were shown the door very quickly.
Quote: "It wouldn't matter how long/short it was if the system jammed due to priority inversion - in which case priority would definitely matter."
If priority inversion causes a bug (for example, someone brought up the Mars rover, where the bug was, IIUC, that a lower-priority interrupt never finished if it was interrupted), then presumably some of the interrupts ARE relatively long. Prioritizing interrupts obviously takes some extra code and memory, uses up stack, and creates the potential for a RARELY SEEN BUG.* In the Mars rover case, who knows how bad the screwup was? It could have been complete and utter laziness. Giving it a fancy name doesn't mean it wasn't a grievously stupid oversight. And if the ISRs are all very short, you can just NOT do it.
Quote: "That's an irrelevant strawman rant."

What I meant was a metaphorical "you," or "one." I got carried away, and I apologize, Tggzzz.
*This seems to be one of your many issues with interrupts. But I don't understand why it can't be tested. You can create this situation on demand with a test setup. Sure, it takes some work, but what doesn't?
Quote: "they were shown the door very quickly"

I frankly appreciate it a lot when you talk: your English is so elegant that it is a pleasure to read. I also appreciate the concepts you expose, since I work in avionics and we have a lot of problems in common.
Someone here simply assumes that he/she can access everything, everywhere, at any time, for free, like when he/she clones a git repository in a couple of seconds. That's a problem with "open source", which always gives a distorted view of reality: these people come to believe that every concept is available online, and so they have no respect when someone takes the time to talk.

Another problem of modern computer science is that even a no-hoper can get on the internet and claim that he/she can program something. It should be proved first. I see kilometres of fsking garbage online.

But I can't ignore that some people here come from a more relaxed area, like home automation, where the constraints and requirements on your code are light and you are 100% guaranteed that nobody will ever die even if you write poor-quality code. I never thought that such people would perceive the forum's talk about mission-critical fields as arrogantly superior and disdainful towards them.

So, let me summarize: you spend your coffee-break time sharing things that are usually kept in-house, and therefore cost money in courses and consulting, and people take it as a display of arrogant superiority.

Does that make sense? For me, they can be shown the door very quickly.
Nested interrupts are quite handy
In avionics, we are usually not allowed to use nested interrupts. Right now I am writing the final report (hopefully the last one before vacation), and I have to spend time on a specific set of test cases whose purpose is to assure that the firmware on the flight board doesn't use nested interrupts, since that is a low-level requirement committed to by the end user.
Quote: "Nested interrupts are quite handy"

In avionics, we are usually not allowed to use nested interrupts.
When something has to be explicitly disallowed, it is usually useful.
Quote: "If you have only one CPU, it is absolutely impossible for these two tasks to run at the same time. What you can do about this? You can run task A. This increases the latency of task B, and it is no longer equal to interrupt latency, but is equal to (interrupt latency + time to do the minimum processing for task A). Or, you can run task B, then the latency of task A will not be zero any longer, but will be equal to the time necessary to do some minimum processing for task B. If neither of these meets your timing requirements, nothing you can do. This is interference caused by multitasking. A can be served alone, B can be served alone, but A and B together cannot."
Not necessarily: provided the constraints are met, interference can be absent. Meeting the constraints can be aided or prevented by appropriate hardware+software mechanisms.
The XS1 architecture is event-driven. It has instructions that can dispatch external events in addition to traditional interrupts. If the program chooses to use events, then the processor has to expect an event and wait at a specific place, so that the event can be handled synchronously. If desired, I/O can instead be handled asynchronously using interrupts. Events and interrupts can be used on any resource that the implementation supports.
Based on that I select the appropriate hardware and decide on what priorities each task gets.
Anyhow, I sometimes try to use DMA sequences to offload blocking situations. I wish ST had made their DMA engine a bit cleverer by implementing some sort of tiny programmable sequencer, so that it behaves more like a very simple coprocessor.