I suspect that a language-based filter assignment, implemented in the runtime, would yield more efficient and robust microcontroller firmware.
Similar to Erlang's pattern matching?
No, not really. I used 'suspect', because I don't have a clear picture of exactly what would work for me.
For example, we could map hardware interrupts and such to event queues using

    'Map' [ context-object '.' ] event-name 'to' [ queue-object '.' ] event-invocation(parameter-list) [ 'using' context-object ] ';'

One possible idea is to support explicit filters on a queue-object, for example

    'Filter' queue-object [ ':' [ 'Allow' | 'Drop' | 'Postpone' ] 'if' filter-expression ]* ';'

that are implicitly evaluated whenever the queue-object is accessed for events (by the runtime).
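To make the runtime-evaluated filter idea concrete, here is a minimal C sketch of what such a mechanism could compile down to: each queue carries a predicate that the runtime consults on access. All names (filtered_queue, queue_next_allowed, allow_type1) are illustrative, not part of any proposed syntax.

```c
#include <stddef.h>

typedef enum { FILTER_ALLOW, FILTER_DROP, FILTER_POSTPONE } filter_action;
typedef struct event { int type; } event;
typedef filter_action (*event_filter)(const event *ev);

typedef struct {
    event        items[16];
    size_t       count;
    event_filter filter;   /* implicitly evaluated on each access; may be NULL */
} filtered_queue;

/* Return the index of the first event the filter allows, or -1 if none.
   For brevity, dropped/postponed events are merely skipped here; a real
   runtime would remove or defer them. */
static int queue_next_allowed(const filtered_queue *q) {
    for (size_t i = 0; i < q->count; i++) {
        if (!q->filter || q->filter(&q->items[i]) == FILTER_ALLOW)
            return (int)i;
    }
    return -1;
}

/* Example filter: allow only events of type 1, drop everything else. */
static filter_action allow_type1(const event *ev) {
    return ev->type == 1 ? FILTER_ALLOW : FILTER_DROP;
}
```

This also makes the overhead concern below visible: the predicate runs for every queued event on every access.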
Another is to support named 'blocks', like electricians use those multi-lock hasps when shutting down systems they're working on, for example

    'Block' queue-object 'with' block-identifier [ 'until' auto-unblock-condition ] ';'
    'Unblock' queue-object 'with' block-identifier ';'

where the queue is blocked if it has one or more blocks placed on it.
The former way is more powerful, but also has more overhead (since the
filter-expression is executed potentially for each queued event during each queue state check). The latter is simpler, potentially requiring just one bit per block per event queue.
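The one-bit-per-block observation can be sketched in a few lines of C: each named block maps to one bit in a mask, and the queue is blocked while any bit is set. The type and identifier names below are illustrative only.

```c
#include <stdint.h>

/* Hypothetical sketch: named queue blocks as a bitmask,
   one bit per block identifier. */
typedef struct {
    uint32_t blocks;   /* bit i set = block i is currently placed on the queue */
} event_queue;

enum { BLOCK_SHUTDOWN = 0, BLOCK_MAINTENANCE = 1 };

static inline void queue_block(event_queue *q, unsigned id)   { q->blocks |=  (1u << id); }
static inline void queue_unblock(event_queue *q, unsigned id) { q->blocks &= ~(1u << id); }
static inline int  queue_is_blocked(const event_queue *q)     { return q->blocks != 0; }
```

The state check is a single compare against zero, which is why this approach is so cheap compared to per-event filter expressions.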
Additional approaches are explicit filter expressions for forwarding events to other queues, and so on.
I just don't know enough to say what would work best for myself, yet.
Like I said, the exploration of this is something I'd love to contribute to, but it is too much work for a single person to achieve.
I also need quite a bit of pushback whenever I don't yet see the downsides of my own suggestions, or my own errors; that happens all too often.
Multiple levels of FIFO may be desirable. For example, in a telecoms server, there will be a single FIFO for all incoming events. There will also be a single FIFO associated with each call in progress, only containing events relevant to that call. Transferring an event from the "incoming" FIFO to one of the "call FIFOs" is done when convenient.
That is exactly the sort of pattern where I think more than one "event queue" object would be useful.
Dealing with individual event priorities leads to all sorts of complex and chaotic situations (like priority inversion).
Dealing with multiple queues, in the runtime (so that one can obtain events from a set of queues, ordered by queue priority for example, with optional filters applied per queue), seems a much more reasonable level of complexity to myself.
Then, the logical equivalent of a semaphore is an event queue, with sem_wait equating to grab object from queue, and sem_post to putting an object to the queue; the equivalent of a mutex is a single-event queue where mutex_lock equates to grabbing the token object from the queue, and mutex_unlock to putting the token object back to the queue, with waiters blocking on the grab-token-object operation.
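The mutex-as-single-token-queue equivalence can be sketched as follows. This is a non-blocking toy (grab either succeeds or fails immediately), with illustrative names; a real runtime would use atomics and would block waiters on the grab operation instead of returning failure.

```c
/* A mutex modeled as a one-slot token queue:
   grabbing the token ~ mutex_lock, returning it ~ mutex_unlock. */
typedef struct {
    int token_present;   /* 1 = token in queue (unlocked), 0 = taken (locked) */
} token_queue;

/* ~ mutex_trylock: returns 1 if the token was grabbed, 0 if already taken. */
static int try_grab_token(token_queue *q) {
    if (q->token_present) { q->token_present = 0; return 1; }
    return 0;
}

/* ~ mutex_unlock: put the token back so the next waiter can grab it. */
static void put_token(token_queue *q) {
    q->token_present = 1;
}
```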
Trick is, for this to be useful on microcontrollers, the operations must compile to efficient machine code.
There's no fundamental reason why it would be any less efficient than other mechanisms that also take account of atomicity, volatility, and parallelism.
No, but it is enticing to anyone developing a new language to think of an abstraction they love, that turns out to be hellishly complicated to implement on currently available hardware, requiring lots of RAM and complex operations like stack unwinding.
The trick is to consider the logical equivalents as having approximately the same level of abstraction and complexity. So, if you think of a way of implementing an event queue that requires the equivalents of mutexes and condition variables to implement, it is probably not suitable for real life implementation.
Indeed, in systems programming, I mostly use lockless techniques using atomic ops for these (GCC/Clang/ICC
atomic built-ins on x86-64 in particular), so I know it is/should be possible on most architectures.
On some, like AVR, you might need to disable interrupts for a few cycles (less than a dozen per critical section), but it should be doable.
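As a concrete example of the lockless style I mean, here is a minimal single-producer/single-consumer ring buffer using the GCC/Clang __atomic built-ins mentioned above. It is a sketch only (fixed size, int payload); on an AVR-class target, the atomic loads/stores of the indices would be the part replaced by short interrupts-disabled sections.

```c
#include <stdint.h>

#define RING_SIZE 8u   /* must be a power of two */

typedef struct {
    uint32_t head;     /* advanced only by the consumer */
    uint32_t tail;     /* advanced only by the producer */
    int      slots[RING_SIZE];
} ring;

/* Producer side: returns 1 on success, 0 if the ring is full. */
static int ring_push(ring *r, int value) {
    uint32_t tail = __atomic_load_n(&r->tail, __ATOMIC_RELAXED);
    uint32_t head = __atomic_load_n(&r->head, __ATOMIC_ACQUIRE);
    if (tail - head == RING_SIZE) return 0;            /* full */
    r->slots[tail & (RING_SIZE - 1)] = value;
    __atomic_store_n(&r->tail, tail + 1, __ATOMIC_RELEASE);
    return 1;
}

/* Consumer side: returns 1 on success, 0 if the ring is empty. */
static int ring_pop(ring *r, int *value) {
    uint32_t head = __atomic_load_n(&r->head, __ATOMIC_RELAXED);
    uint32_t tail = __atomic_load_n(&r->tail, __ATOMIC_ACQUIRE);
    if (head == tail) return 0;                        /* empty */
    *value = r->slots[head & (RING_SIZE - 1)];
    __atomic_store_n(&r->head, head + 1, __ATOMIC_RELEASE);
    return 1;
}
```

No mutexes or condition variables are needed, which is exactly the property the queue primitives above would have to preserve to be viable on microcontrollers.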
But I'll modify that to include concepts like "as simple as possible but no simpler" and "simple programs that obviously have no defects vs complex programs that have no obvious defects" and "visibility of deadlock/livelock properties".
Very true.
The reason I don't mind having "lots" of reserved keywords is that explicit expressions which make static analysis of such things (like which blocks may be placed on which event queues) easier are more desirable and important than the problem of having to rename identifiers in user code to avoid conflicts.
Other features, like "arrays" (memory ranges) instead of pointers (memory points/singular addresses), if constructed so that static analysis can verify if all accesses are within the ranges, can fix the fundamental memory safety issues we have with most C code right now. But these, too, rely on ensuring static and compile-time analysis is well supported by the language features and definitions.
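A run-time C sketch of the "range instead of pointer" idea is below; the names (int_span, span_get) are illustrative. In the language I'm describing, the point would be for the compiler to prove such checks redundant and elide them, rather than paying for them at run time.

```c
#include <stddef.h>

/* A memory range: base address plus element count,
   instead of a bare pointer. */
typedef struct {
    int    *base;
    size_t  len;
} int_span;

/* Checked access: returns 1 and stores the element on success,
   0 if the index falls outside the range. */
static int span_get(int_span s, size_t i, int *out) {
    if (i >= s.len) return 0;
    *out = s.base[i];
    return 1;
}
```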
In imperative languages, write operations are synchronous: the call returns when the data has been sent to the storage device. Especially on microcontrollers, much of that time is spent busy-waiting for the operation to complete. We could get more out of the same hardware if that wait time could be used for something more useful. To do so, a write must be turned into a two-part operation: you start the write, and at some time later the write completes; in the meantime, you must not modify the buffer contents, as it is still in the process of being written out.
If you are using some sort of preemptive RTOS, this is typically the case, i.e. the program flow will contain these (inefficient) busy loops.
Sure, but there is no actual technical or logical requirement for them. Even in C, one can implement a write as

    int async_write(int fd, const void *buf, size_t len, void *ctx,
                    int (*completed)(int fd, const void *buf, size_t len, void *ctx, int status),
                    int (*failure)(int fd, const void *buf, size_t len, void *ctx, int status));

where the call returns immediately, but the buf is read sometime afterwards, and must stay unmodified until one of the two callbacks is called.
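To show the caller-visible contract, here is a toy simulator of that interface. This "device" completes the write immediately by invoking the callback inline, whereas a real driver would start a DMA transfer, return at once, and fire the callback from an interrupt; async_write_sim and mark_done are illustrative names, not a real API.

```c
#include <stddef.h>

typedef int (*write_cb)(int fd, const void *buf, size_t len, void *ctx, int status);

/* Toy two-phase write: the caller must leave buf untouched until
   either the completed or the failure callback has fired. */
static int async_write_sim(int fd, const void *buf, size_t len, void *ctx,
                           write_cb completed, write_cb failure) {
    (void)failure;                 /* this toy device never fails */
    return completed(fd, buf, len, ctx, 0);
}

/* Example completion callback: records that the buffer may be reused. */
static int mark_done(int fd, const void *buf, size_t len, void *ctx, int status) {
    (void)fd; (void)buf; (void)len; (void)status;
    *(int *)ctx = 1;               /* buffer is now free to modify again */
    return 0;
}
```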
This is the difference between one's write operation being event-oriented or imperative, although others use synchronous vs. asynchronous, and other terms... (Which is why being hung up on specific terms, like 'event-oriented' vs. 'event-based', is simply utter bullshit: human languages are vague, so as long as we agree on what we mean by each term, the terms themselves don't matter; only the concepts the terms represent matter. And as long as we convey the concepts to each other, all is good.)
Let me reiterate: I am personally not proposing anything new at the machine code level. Everything I've described has already been done in various languages, and quite often in C.
What I am trying to achieve, by describing how to discover what a true low-level event-based microcontroller language could be, is to discuss how to find a better way (than current C/C++) to express these patterns, and hopefully avoid the deficiencies C has (like the lack of memory safety, difficulty of static analysis, no standard function/variable attribute declaration mechanism, and so on), arriving at a
better programming language for microcontrollers and similar targets, where the program or firmware is built on top of the concept of events.