I did not write that.
Of course you didn't. I fixed the quotation.
You appear not to understand the relevance of the classic "dining philosophers" problem to your position.
The deadlock problem has nothing to do with real-time performance. Avoiding deadlocks is an algorithmic question. Think about it: you can find a solution to the "philosophers" problem before you even think of the hardware platform you're going to use.
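To illustrate that the solution is purely algorithmic, here is a minimal sketch using POSIX threads; the fix (always acquiring the forks in a fixed global order) works regardless of what the code eventually runs on. The thread layout and names are my own illustration, not anything from a particular system.

```c
#include <pthread.h>
#include <stdio.h>

#define N 5

static pthread_mutex_t fork_mtx[N];

static void *philosopher(void *arg) {
    int id    = (int)(long)arg;
    int left  = id;
    int right = (id + 1) % N;
    /* Deadlock avoidance: always take the lower-numbered fork first,
       so no circular wait can ever form. */
    int first  = left < right ? left : right;
    int second = left < right ? right : left;

    pthread_mutex_lock(&fork_mtx[first]);
    pthread_mutex_lock(&fork_mtx[second]);
    printf("philosopher %d eats\n", id);
    pthread_mutex_unlock(&fork_mtx[second]);
    pthread_mutex_unlock(&fork_mtx[first]);
    return NULL;
}

int main(void) {
    pthread_t t[N];
    for (int i = 0; i < N; i++) pthread_mutex_init(&fork_mtx[i], NULL);
    for (int i = 0; i < N; i++) pthread_create(&t[i], NULL, philosopher, (void *)(long)i);
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}
```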
We're discussing real-time systems here, which depend heavily on the hardware architecture.
Real-time performance is the ability to act at a specified point in time. When something happens, your program needs to take control and do something. The time between the triggering event and the moment the program acts is called latency. Sometimes, if the event is predictable (such as a periodic timer), you can achieve zero latency. Other times the event is unpredictable, and zero latency is therefore impossible. The lowest latency you can achieve is most likely the interrupt latency, or, if you can dedicate your entire CPU to this single event, the duration of the shortest loop capable of detecting the event.
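For the dedicated-CPU case, a minimal sketch of such a detection loop, where PORT_IN and its address are hypothetical placeholders for a memory-mapped input register, not any real part's register:

```c
#include <stdint.h>

/* Hypothetical memory-mapped input register; the actual name and
   address depend entirely on the MCU you use. */
#define PORT_IN   (*(volatile uint8_t *)0x0040u)
#define EVENT_BIT 0x01u

void wait_for_event(void) {
    /* Worst-case detection latency is one pass through this loop,
       typically just a few instruction cycles. */
    while (!(PORT_IN & EVENT_BIT))
        ;  /* spin until the event bit goes high */
}
```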
Now imagine you have two real-time tasks. Say task A does something at a fixed period with zero latency, while task B does something when a rising edge arrives on an external line; task B therefore has interrupt latency. What happens if the events occur as follows:
- a rising edge for task B arrives
- the interrupt latency time passes so that task B has to run now
- at exactly the same time the period for task A expires
If you have only one CPU, it is absolutely impossible for these two tasks to run at the same time. What can you do about this? You can run task A first; this increases the latency of task B, which is no longer equal to the interrupt latency but to (interrupt latency + the minimum processing time for task A). Or you can run task B first; then the latency of task A is no longer zero, but equals the minimum processing time for task B. If neither option meets your timing requirements, there is nothing you can do. This is interference caused by multitasking: A can be served alone, B can be served alone, but A and B together cannot.
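A small worked example with made-up numbers (nothing here comes from a particular chip) shows how the two choices trade latency against each other:

```c
#include <stdio.h>

int main(void) {
    /* Illustrative numbers only; real values depend on the CPU. */
    const double irq_latency_us = 1.0;  /* interrupt latency             */
    const double a_min_us       = 3.0;  /* minimum processing for task A */
    const double b_min_us       = 2.0;  /* minimum processing for task B */

    /* Choice 1: serve A first, so B waits behind A's minimum processing. */
    printf("A first: latency(A) = 0.0 us, latency(B) = %.1f us\n",
           irq_latency_us + a_min_us);

    /* Choice 2: serve B first, so A loses its zero latency. */
    printf("B first: latency(A) = %.1f us, latency(B) = %.1f us\n",
           b_min_us, irq_latency_us);
    return 0;
}
```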
The only solution is to run tasks A and B on two different physical devices. Say you get two PIC16s: one does task A, the other does task B, and they communicate whenever neither task A nor task B is running. For 2 tasks we need 2 CPUs; for 8 tasks we need 8 CPUs.
What does XMOS do? If I understand correctly, it has many CPUs (cores) which can be dynamically assigned to tasks as necessary. How many cores do we need to make sure 2 tasks never interfere with each other in the way I explained above? Two. How many cores do we need for 8 tasks? Eight. Well, we could probably get away with 6, but if there's any chance that all 8 events happen at the same time, then 6 wouldn't be robust, would it? In short, to handle N tasks we need N CPUs. That is the same number of CPUs as if we dedicated one CPU to each task, and practically the same as a system with N separate MCUs on the board.
What do most others do? They have peripheral modules which handle common tasks, so that the time-critical portions can be performed entirely by the peripherals without any CPU intervention. Say you have a PIC with 16 input capture modules, each of which can capture edges with 10-20 ns resolution. Or you have a number of PWM modules which produce accurate signals with transitions timed at 10 ns resolution. And so on. As a result, a single PIC can perform far more tasks than a multi-core solution, and the tasks do not interfere with each other at all, in the sense that the time-critical actions are never postponed.
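As a sketch of the idea (the register names and addresses below are hypothetical placeholders, not actual PIC SFRs; the real ones come from the datasheet), an input-capture module timestamps each edge in hardware, and the CPU only collects the result later:

```c
#include <stdint.h>

/* Hypothetical input-capture registers; names and addresses are
   placeholders, not any real part's SFR map. */
#define IC1_CTRL (*(volatile uint16_t *)0x0140u)
#define IC1_BUF  (*(volatile uint16_t *)0x0142u)

void capture_init(void) {
    /* Tell the module to latch a timer timestamp on every rising edge.
       From here on, the timing happens entirely in hardware. */
    IC1_CTRL = 0x0003u;  /* hypothetical "capture rising edges" setting */
}

uint16_t capture_read(void) {
    /* The CPU picks up the timestamp whenever it gets around to it;
       the time-critical part (latching the edge) was never postponed. */
    return IC1_BUF;
}
```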
The question is, what do you do if the real-time requirements are so complex that they cannot be handled by peripherals? Very simple: put an extra MCU or two on the board for just that purpose. Not enough? Move to an FPGA.