One way of achieving a deterministic and robust system is to use a time-triggered software architecture:
https://www.safetty.net/products/publications/pttes
In a time-triggered system there are no asynchronous interrupts that can preempt program execution. There may be a single timer-based, deterministic interrupt that drives the actual program. The main loop consists only of a while (1) loop with a sleep() call that keeps the system in a low-power mode when it is not executing code. The system is built from state machines, and the program runs to completion within each timer tick.
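A minimal sketch of such a system, in the spirit of that book (the task names, the tick period, and enter_low_power_mode() are placeholders for whatever your part provides):

```c
#include <stdint.h>

#define NUM_TASKS 2
typedef void (*task_fn)(void);

/* Placeholder tasks: each must run to completion well within one tick. */
static void task_read_sensor(void)    { /* ... */ }
static void task_update_display(void) { /* ... */ }

static const task_fn tasks[NUM_TASKS] = { task_read_sensor, task_update_display };

static volatile uint8_t tick_fired;

/* The only interrupt in the system. The vector name and the timer
 * configuration are device-specific. */
void timer_tick_isr(void)
{
    tick_fired = 1;
}

extern void enter_low_power_mode(void);  /* placeholder: e.g. __WFI() on Cortex-M */

int main(void)
{
    /* ... configure the timer for a 1 ms tick (device-specific) ... */
    for (;;) {
        while (!tick_fired)
            enter_low_power_mode();      /* sleep until the tick wakes us */
        tick_fired = 0;
        for (uint8_t i = 0; i < NUM_TASKS; i++)
            tasks[i]();                  /* run-to-completion, no preemption */
    }
}
```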
I've worked with systems written using the techniques described in that book, and I found that the pervasive use of state machines makes the code tedious to write and maintain.
Since everything runs from a timer interrupt, you have to maintain state yourself between task invocations, and you have to make sure that each task (and each group of tasks run from the same timer interrupt) takes less time to execute than the timer period. Except for the very first task run when the tick interrupt is entered, every task experiences scheduling jitter: when it starts depends on how long all of the tasks ahead of it in the queue take to run, and that can vary from one tick to the next.
All-in-all, I prefer to design embedded systems (except for extremely simple ones) using an RTOS with good task synchronization facilities.
Yup: with interrupts and tasks competing for shared resources, we need mutexes (a special kind of semaphore) reflecting the relative urgency of the tasks accessing the shared resource.
This makes things more complex and more difficult to analyze.
Most of the time the combination works fine, and you can believe your system works. Very infrequently, however, an interrupt can occur in just the right way to set up the perfect priority-inversion scenario and ...
... oops, shit happens ... if you aren't aware of the consequences and able to sort it out.
~ ~ ~ ~ ~ ~ ~ ~
interrupts + shared resources (guarded by mutexes) = more care required
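For the record, the classic three-task scenario, sketched here with FreeRTOS-style calls (the priorities, stack sizes, and task bodies are illustrative, not from any real system):

```c
#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

static SemaphoreHandle_t bus_mutex;   /* guards a shared bus */

/* Low priority: holds the mutex for a while. */
static void low_task(void *arg)
{
    (void)arg;
    for (;;) {
        xSemaphoreTake(bus_mutex, portMAX_DELAY);
        /* ... long-ish access to the shared bus ... */
        xSemaphoreGive(bus_mutex);
    }
}

/* Medium priority: pure CPU work, never touches the mutex. */
static void mid_task(void *arg)
{
    (void)arg;
    for (;;) {
        /* ... number crunching ... */
    }
}

/* High priority: blocks on the mutex held by low_task. Without priority
 * inheritance, mid_task can preempt low_task indefinitely, so high_task
 * is effectively demoted below mid_task: priority inversion. */
static void high_task(void *arg)
{
    (void)arg;
    for (;;) {
        xSemaphoreTake(bus_mutex, portMAX_DELAY);
        /* ... urgent access ... */
        xSemaphoreGive(bus_mutex);
        vTaskDelay(pdMS_TO_TICKS(10));
    }
}

int main(void)
{
    /* FreeRTOS mutexes implement priority inheritance, which is the
     * standard cure for this scenario; a plain binary semaphore
     * (xSemaphoreCreateBinary) would not. */
    bus_mutex = xSemaphoreCreateMutex();
    xTaskCreate(low_task,  "low",  128, NULL, 1, NULL);
    xTaskCreate(mid_task,  "mid",  128, NULL, 2, NULL);
    xTaskCreate(high_task, "high", 128, NULL, 3, NULL);
    vTaskStartScheduler();
}
```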
You capture the time-of-arrival of an event in a hardware counter, and read that time when you get around to processing the event.
Modern embedded processors have such facilities built in and easy to use.
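For example, on an STM32-class part the timer's input-capture channel latches the counter for you (register names follow ST's CMSIS device headers; adapt to your MCU):

```c
#include "stm32f1xx.h"   /* ST CMSIS device header; adjust for your part */

static volatile uint32_t last_edge_ticks;

/* TIM2 channel 1 configured in input-capture mode: the counter value is
 * latched into CCR1 by hardware at the instant of the edge, so the
 * timestamp is exact no matter how late this ISR actually runs. */
void TIM2_IRQHandler(void)
{
    if (TIM2->SR & TIM_SR_CC1IF) {
        last_edge_ticks = TIM2->CCR1;   /* reading CCR1 clears CC1IF */
        /* defer all real processing to the main loop */
    }
}
```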
Here's an infamous classic that all embedded software engineers need to understand, the Mars Pathfinder priority-inversion story: https://catless.ncl.ac.uk/Risks/19.49.html#subj1
Quote: interrupts + shared resources (guarded by mutexes) = more care required
Debugging the Mars Pathfinder on the surface of Mars? Wozzat?
Writing the code is usually a trivial part of the implementation. Deciding what needs to be done, how to do it, and then ensuring that it does it takes far more time and brain power. Validation and verification is usually much more awkward than writing code.
FSMs come with important side benefits
That's rather in the nature of code that shares a single resource, in this case a processor.
Quote: All-in-all, I prefer to design embedded systems (except for extremely simple ones) using an RTOS with good task synchronization facilities.
That's valid. Of course, whether or not you realise it, you are creating an FSM implemented using the RTOS facilities. Personally I prefer to go the whole hog and code FSMs explicitly.
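For illustration, here's what "explicit" means: a debounced push-button coded as a small FSM (a sketch; the tick rate and names are invented):

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum { BTN_UP, BTN_MAYBE_DOWN, BTN_DOWN, BTN_MAYBE_UP } btn_state_t;

/* Called once per scheduler tick, e.g. every 10 ms. `raw` is the sampled
 * pin level; returns true on a debounced press event. */
bool button_fsm(bool raw)
{
    static btn_state_t state = BTN_UP;
    bool pressed = false;

    switch (state) {
    case BTN_UP:
        if (raw) state = BTN_MAYBE_DOWN;
        break;
    case BTN_MAYBE_DOWN:                     /* one tick of debounce */
        if (raw) { state = BTN_DOWN; pressed = true; }
        else     state = BTN_UP;
        break;
    case BTN_DOWN:
        if (!raw) state = BTN_MAYBE_UP;
        break;
    case BTN_MAYBE_UP:
        state = raw ? BTN_DOWN : BTN_UP;
        break;
    }
    return pressed;
}
```

The state lives in one variable, every transition is visible in the switch, and the function runs to completion on every tick.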
And whenever possible I like RTOS features to be implemented in hardware, e.g. http://www.xmos.com/download/private/xCORE-Architecture-Flyer(1.1).pdf
The architecture is unique, but I don't think it is niche. I think it confronts head-on the problems that are arising with modern semiconductor technology, and deals with them very effectively. Most architectures and tools skirt around the modern problems, wishing they weren't there, and blaming the developer for not understanding all the arcane caveats in the specification and implementation.
Yet millions of embedded applications are successfully developed every year with traditional architectures.
Quote: Yet millions of embedded applications are successfully developed every year with traditional architectures.

So the theory smells like: millions of flies can't be wrong, because they all land on the same poop on the grass.
Concerning Interrupts vs FSM vs RTOS.
If you want the lowest possible latency, you have to use interrupts.
If you don't need urgency, then either an FSM, a cooperative scheduler, or an RTOS will work. They all produce jitter and none has perfect timing. It is impossible to have perfect timing for several tasks on a single CPU. Nor do you need it.
An RTOS consumes lots of resources and forces you to do lots of manual synchronization between tasks. It cannot run on smaller MCUs at all.
A cooperative scheduler is much more lightweight and at the same time more efficient. It would be my first choice most of the time.
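For a sense of scale, a complete cooperative dispatcher can look like this (a sketch; poll_uart/blink_led are placeholders):

```c
#include <stdint.h>

typedef struct {
    void   (*run)(void);   /* must run to completion quickly */
    uint16_t period;       /* in ticks */
    uint16_t countdown;    /* ticks until next run */
} coop_task_t;

/* Placeholder task functions. */
extern void poll_uart(void);
extern void blink_led(void);

static coop_task_t tasks[] = {
    { poll_uart, 1,   1   },   /* every tick */
    { blink_led, 500, 500 },   /* every 500 ticks */
};

/* Call once per timer tick (from the tick ISR, or from main after
 * polling a tick flag). No preemption, no context switches, no stacks:
 * the whole "OS" is this loop. */
void scheduler_tick(void)
{
    for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++) {
        if (--tasks[i].countdown == 0) {
            tasks[i].countdown = tasks[i].period;
            tasks[i].run();
        }
    }
}
```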
Quote: The architecture is unique, but I don't think it is niche. I think it confronts head-on the problems that are arising with modern semiconductor technology, and deals with them very effectively. Most architectures and tools skirt around the modern problems, wishing they weren't there, and blaming the developer for not understanding all the arcane caveats in the specification and implementation.
Yet millions of embedded applications are successfully developed every year with traditional architectures. XMOS seems to have made inroads only in the voice and audio arenas. Perhaps I'll consider them if they become more mainstream.
Nobody would claim that XMOS is the only game in town, but it is the only one I have seen that directly addresses using multiple independent cores. There aren't millions of such applications, partly because using C/C++ with shared memory produces many infrequent, latent bugs.
As for mainstream, that's difficult to quantify. Certainly XMOS has big powerful backers and lots of investment, and they have been developing and shipping this architecture for a decade (based on fundamental concepts from 30/40 years ago).
Quote: If you don't need urgency, then either an FSM, a cooperative scheduler, or an RTOS will work. They all produce jitter and none has perfect timing. It is impossible to have perfect timing for several tasks on a single CPU. Nor do you need it.
Quote: Nobody would claim that XMOS is the only game in town, but it is the only one I have seen that directly addresses using multiple independent cores. There aren't millions of such applications, partly because using C/C++ with shared memory produces many infrequent, latent bugs.
Some applications do need the timing guarantees that XMOS supports, but those are probably in the minority. For such applications I've usually resorted to pairing a microcontroller with an FPGA where the FPGA does the heavy lifting and provides more flexibility than XMOS can. XMOS, as an integrated solution, is probably more cost effective for high-volume products.
Quote: As for mainstream, that's difficult to quantify. Certainly XMOS has big powerful backers and lots of investment, and they have been developing and shipping this architecture for a decade (based on fundamental concepts from 30/40 years ago).
Perhaps. I think you're probably referring to the Transputer from the 1980s; I believe David May was involved with both it and the XMOS architecture. The Transputer was certainly hyped in its day, but it ran out of steam in the early 1990s and fell off the map. I hope XMOS doesn't suffer that fate.
There are some techniques which can be used to reduce or eliminate task jitter:
I'll bet it is orders of magnitude worse than with the xCORE processors. The £12 board I am using will respond to 8 different inputs simultaneously within <100ns. Guaranteed.
I am getting guaranteed perfect timing for two hard real time inputs (capturing and counting the transitions in two 62.5Mb/s data streams) plus front panel buttons and LCD, plus USB comms with a PC.
Quote: There are some techniques which can be used to reduce or eliminate task jitter: ...
If you make good use of your chip's peripherals, you don't really need to get to the processing immediately: the peripherals have buffers and may be able to tolerate significant delays, so jitter will not pose much of a problem.
Of course, you may have something time-critical which cannot tolerate any delay, but you cannot have very many such things, otherwise they will interfere with each other. Such time-critical tasks cannot be treated like the others. You have to process them in high-priority interrupts, possibly writing the handlers in assembler to avoid the C prologue/epilogue. There are no techniques which will help you beyond that: nothing you do can give your interrupts better latency than your hardware can provide.
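In practice that means the time-critical ISR does the bare minimum and hands off to the main loop. A single-producer/single-consumer sketch (the register address and handle_byte() are invented):

```c
#include <stdint.h>

#define RB_SIZE 64                    /* power of two dividing 256 */
#define UART_DATA_REG (*(volatile uint8_t *)0x40001000u)  /* hypothetical */

static volatile uint8_t rb[RB_SIZE];
static volatile uint8_t rb_head;      /* written only by the ISR */
static uint8_t          rb_tail;      /* written only by main */

static void handle_byte(uint8_t b) { (void)b; /* ... */ }

/* Time-critical ISR: a few instructions, nothing more. */
void uart_rx_isr(void)
{
    rb[rb_head & (RB_SIZE - 1)] = UART_DATA_REG;
    rb_head++;                        /* uint8_t wraps cleanly */
}

/* Main loop drains at leisure; one producer, one consumer, so no
 * locking is needed beyond the volatile qualifiers. */
void process_pending(void)
{
    while (rb_tail != rb_head) {
        uint8_t byte = rb[rb_tail & (RB_SIZE - 1)];
        rb_tail++;
        handle_byte(byte);
    }
}
```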
Quote: I'll bet it is orders of magnitude worse than with the xCORE processors. The £12 board I am using will respond to 8 different inputs simultaneously within <100ns. Guaranteed.
I am getting guaranteed perfect timing for two hard real time inputs (capturing and counting the transitions in two 62.5Mb/s data streams) plus front panel buttons and LCD, plus USB comms with a PC.
Of course, if you have multiple cores they won't interfere with each other, in the same way that multiple MCUs on a board wouldn't, with each working on its own time-critical task without any interference.
However, the techniques for achieving multitasking in such a system are completely different from those for a single MCU. You don't need to lift a finger to ensure simultaneous action: each CPU does its own job.
Implementing multitasking on a single CPU is a little more complicated. You can use several different techniques, and these techniques let you avoid adding extra CPUs to your system. IMHO, this thread is about such techniques.
as an architectural design pattern realised in a common language.
Quote: There are some techniques which can be used to reduce or eliminate task jitter:
Yes, for example, you can run only one task per interrupt: on one timer interrupt you run task one, on the next you run task two, and so on, repeating the cycle over and over. The jitter will be minimal.
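A sketch of that rotation (task bodies and the ISR name are placeholders):

```c
#include <stdint.h>

#define NUM_TASKS 3

static void task_a(void) { /* ... */ }
static void task_b(void) { /* ... */ }
static void task_c(void) { /* ... */ }

static void (*const rota[NUM_TASKS])(void) = { task_a, task_b, task_c };

/* Timer tick ISR: exactly one task per tick, so each task always starts
 * at a fixed offset from its own tick edge: minimal jitter. The price
 * is that each task now runs at 1/NUM_TASKS of the tick rate. */
void timer_tick_isr(void)
{
    static uint8_t next = 0;
    rota[next]();
    next = (uint8_t)((next + 1) % NUM_TASKS);
}
```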
But what is the reason to have less jitter?
<snip>
Quote: as an architectural design pattern realised in a common language.
Exactly the point.
Perhaps those who like the C language will have a shock reading a paper like this!
Not shocking by any means. I had already pondered many of the problems described in the paper, and I have never ever used an RTOS, or anything other than C/BASIC, for microcontrollers.
Just state machines and interrupt-enabled code.
One just has to spend a single minute thinking about what would happen if a variable (one not held in a register or on the stack) changes its value in the middle of a function, because DMA or an interrupt handler altered the memory behind it... I think it was even mentioned by my high-school (!) teachers once or twice.
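A minimal illustration of that lesson, assuming a Cortex-M-style part where CMSIS supplies __disable_irq()/__enable_irq():

```c
#include <stdint.h>

void __disable_irq(void);  /* provided by CMSIS on Cortex-M */
void __enable_irq(void);

/* Updated by an ISR (or by DMA writing to memory). `volatile` forces the
 * compiler to re-read it, but does NOT make the 32-bit access atomic on
 * an 8- or 16-bit CPU. */
static volatile uint32_t ms_ticks;

void systick_isr(void)
{
    ms_ticks++;
}

/* Wrong on small CPUs: the ISR can fire between the byte/word reads of
 * ms_ticks, yielding a torn, wildly wrong value. */
uint32_t uptime_naive(void)
{
    return ms_ticks;
}

/* Safe: take the snapshot inside a critical section. */
uint32_t uptime_safe(void)
{
    uint32_t snapshot;
    __disable_irq();
    snapshot = ms_ticks;
    __enable_irq();
    return snapshot;
}
```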