Message passing schemes are one of my "favorite" topics in programming lately, and they can also be applied to passing and handling events.
Message passing isn't well taught in traditional undergrad computing courses, for several reasons:
It isn't a classical computer science topic, cf. lambda calculus, compilers, or standard algorithms.
It comes from hardware engineering.
Mainstream languages don't have it as a built-in feature; it is relegated to libraries.
I've used MPI a lot in both C and Fortran, but my use cases are HPC and distributed computing, and the messages contain shared simulation data.
I'm also familiar with message mailbox techniques used in e.g. Linux kernel drivers, but haven't really explored microkernels using message passing.
Simply put, my own experience with message passing is too one-sided (distributed data as messages) to know much about how one could treat events as messages efficiently at the low (machine code) level. I am sure it can be done; I just don't know exactly how.
(In MPI, each message is identified by a tuple (sender, tag), and messages with different identifying tuples can be received in any order. This makes asynchronous/nonblocking message passing (as in MPI_Isend() and MPI_Irecv()) especially powerful. The per-process runtime in OpenMPI is a helper thread, which coordinates the messaging between processes, and is surprisingly lightweight.)
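For concreteness, here is a minimal sketch of the nonblocking pattern I mean, in C using the standard MPI API (the buffer size and tag are arbitrary, chosen just for illustration); the transfer can overlap with computation and is matched by the (source, tag) pair:

    /* Minimal nonblocking MPI sketch: rank 0 sends simulation data to rank 1
     * while both ranks are free to keep computing. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        double data[1024] = {0};
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Start the send and return immediately; the runtime progresses
             * the transfer in the background. */
            MPI_Isend(data, 1024, MPI_DOUBLE, 1, 42, MPI_COMM_WORLD, &req);
        } else if (rank == 1) {
            MPI_Irecv(data, 1024, MPI_DOUBLE, 0, 42, MPI_COMM_WORLD, &req);
        }

        /* ... overlap computation with communication here ... */

        if (rank <= 1)
            MPI_Wait(&req, MPI_STATUS_IGNORE);  /* complete the transfer */

        MPI_Finalize();
        return 0;
    }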
Simple events like timeouts and interrupts do seem quite straightforward, but when the message involves a significant amount of data (say, a DMA completion interrupt), we'll likely find we also need to consider zero-copy techniques; not so much for speed or efficiency, but because RAM and buffer space are quite limited on microcontrollers.
One possibility for handling zero-copy buffer events is to have "acknowledged events": when the buffer is full, the related event is dispatched, and the buffer is "owned" by the event handler until it acknowledges the buffer-full event with a corresponding buffer-now-free event. Obviously, there is nothing special about such acknowledgements; it is just a programming paradigm. We know from existing programming languages that the approach must be consistent and useful, or we'll end up with footguns like gets() in C. This is also why I believe development through real-world examples/problems is the way to go.
(I am fully aware that I just described a completely asynchronous/non-blocking I/O or buffer-passing mechanism. This was deliberate, as we really do need this to make better MCU firmware. Less buffer-bloat, and so on.)
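To make the idea concrete, here is a minimal sketch in plain C; the dispatcher primitives (event_post(), event_wait()) and the ISR hook are hypothetical names made up for illustration, not an existing API:

    #include <stdint.h>
    #include <stddef.h>

    enum event_type { EV_BUF_FULL, EV_BUF_FREE };

    struct event {
        enum event_type type;
        uint8_t        *buf;   /* zero-copy: a pointer, not a copy of the data */
        size_t          len;
    };

    /* Hypothetical dispatcher primitives; a real system would provide these. */
    void event_post(struct event ev);
    struct event event_wait(void);

    /* DMA completion interrupt: publish the full buffer and do not touch it
     * again until the consumer acknowledges it with EV_BUF_FREE. */
    void dma_complete_isr(uint8_t *buf, size_t len)
    {
        event_post((struct event){ .type = EV_BUF_FULL, .buf = buf, .len = len });
    }

    /* Consumer task: owns the buffer between the two events. */
    void consumer_task(void)
    {
        for (;;) {
            struct event ev = event_wait();
            if (ev.type == EV_BUF_FULL) {
                /* ... parse or forward ev.buf[0..ev.len) in place ... */
                event_post((struct event){ .type = EV_BUF_FREE,
                                           .buf = ev.buf, .len = ev.len });
            }
        }
    }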
If I were to try and develop such a programming language, I would start by creating a simple IP stack and an example application, perhaps an HTTP server displaying a static page (with GET and HEAD support), in this new language. The language, the IP stack implementation, and the low-level machine code generated (roughly; no need for an actual compiler, just the rough intent, perhaps the calling ABI on a specific architecture) would all be developed in parallel. One could examine readability and maintainability by giving the language spec and a snippet of code containing a bug (something the compiler would not complain about; a thinko rather than a typo) to a suitable test person, describing the effect of the bug, and asking them to find it. If the person finds the bug without having to consult the language spec beyond the introduction, we'd be on the right track.
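As an illustration of the kind of "thinko" I mean, here in C rather than in the imagined language, reusing the hypothetical event types from the sketch above (parse_frame() is likewise made up):

    /* Hypothetical consumer of the received data. */
    void parse_frame(const uint8_t *buf, size_t len);

    /* A thinko the compiler will not complain about: the handler releases the
     * buffer before it has finished reading it, so the DMA engine may
     * overwrite the data while parse_frame() is still using it. */
    void on_buffer_full(struct event ev)
    {
        event_post((struct event){ .type = EV_BUF_FREE,
                                   .buf = ev.buf, .len = ev.len });
        parse_frame(ev.buf, ev.len);   /* bug: uses the buffer after giving it away */
    }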
Things like keywords or reserved words, operators, exact syntax, and so on, are secondary, because the point is to develop a language that effectively expresses the concepts. You could even have antagonistic teams find ways of obfuscating code and hiding bugs in it, to arrive at the most robust syntax.
That large organizations like Google do not seem to grok this at all, and instead focus on churning out one imperative object-oriented language after another, really depresses me. So perhaps I'm a bit hard on OP for going down the exact same unfruitful road, but I have a reason here; it's not just "my opinion". The entire method is at fault, and a completely different design approach should be taken.