Author Topic: event-oriented programming language  (Read 7074 times)


Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20768
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: event-oriented programming language
« Reply #25 on: January 05, 2023, 10:27:11 am »
https://en.wikipedia.org/wiki/Event-driven_programming

Quote
Criticism
The design of those programs which rely on event-action model has been criticised, and it has been suggested that the event-action model leads programmers to create error-prone, difficult to extend and excessively complex application code.[2] Table-driven state machines have been advocated as a viable alternative.[5] On the other hand, table-driven state machines themselves suffer from significant weaknesses including the state explosion phenomenon.[6] A solution for this is to use Petri nets.
Bold added by me.

State explosion can be limited using hierarchical state machines. They are somewhat more complex to implement, but way more flexible than the traditional flat, table-driven state machines.

They aren't that much more complex to implement, and if the FSM is complex then it is a good tradeoff. I've used the state behaviour=class, event=method, current state = singleton instance of class, to very good effect.
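For illustration, a minimal C sketch of that pattern (all names hypothetical, not taken from any real project): each state is a struct of event-handler function pointers, an event is a call through the current-state pointer, and each state has exactly one constant (singleton) instance. Logging every transition then only needs one line at the dispatch sites.

Code: [Select]
#include <stdio.h>

/* One "class" per state: a table of event handlers (the "methods"). */
struct state {
    const char *name;
    const struct state *(*on_coin)(void);  /* each event handler returns the next state */
    const struct state *(*on_push)(void);
};

/* Forward declarations of the singleton state instances. */
static const struct state locked, unlocked;

/* Handlers for the "locked" state. */
static const struct state *locked_coin(void)   { puts("unlocking");     return &unlocked; }
static const struct state *locked_push(void)   { puts("still locked");  return &locked;   }

/* Handlers for the "unlocked" state. */
static const struct state *unlocked_coin(void) { puts("already paid");  return &unlocked; }
static const struct state *unlocked_push(void) { puts("locking again"); return &locked;   }

static const struct state locked   = { "locked",   locked_coin,   locked_push   };
static const struct state unlocked = { "unlocked", unlocked_coin, unlocked_push };

int main(void)
{
    const struct state *current = &locked;  /* current state = pointer to a singleton instance */
    current = current->on_push();           /* event = call through the current state */
    current = current->on_coin();
    current = current->on_push();
    printf("final state: %s\n", current->name);
    return 0;
}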

It is easy to add logging with trivial performance impact in a production system, which was invaluable during commissioning and in (correctly) deflecting blame onto the other company's products. Great for avoiding lawyers :)

Ditto adding performance measurements.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Kalvin

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: event-oriented programming language
« Reply #26 on: January 05, 2023, 10:46:50 am »
https://en.wikipedia.org/wiki/Event-driven_programming

Quote
Criticism
The design of those programs which rely on event-action model has been criticised, and it has been suggested that the event-action model leads programmers to create error-prone, difficult to extend and excessively complex application code.[2] Table-driven state machines have been advocated as a viable alternative.[5] On the other hand, table-driven state machines themselves suffer from significant weaknesses including the state explosion phenomenon.[6] A solution for this is to use Petri nets.
Bold added by me.

State explosion can be limited using hierarchical state machines. They are somewhat more complex to implement, but way more flexible than the traditional flat, table-driven state machines.

They aren't that much more complex to implement, and if the FSM is complex then it is a good tradeoff. I've used the state behaviour=class, event=method, current state = singleton instance of class, to very good effect.

It is easy to add logging with trivial performance impact in a production system, which was invaluable during commissioning and in (correctly) deflecting blame onto the other company's products. Great for avoiding lawyers :)

Ditto adding performance measurements.

Implementing the state transitions in hierarchical state machines is a bit more involved compared to simple state machines, because the HSM needs to support entry actions, the initial-state concept and exit handlers, and perform them in the correct order: states are first exited up to the common parent, and then the target state is entered, executing any entry actions and checking initial states along the way. Miro Samek's book "Practical UML Statecharts in C/C++, 2nd Ed: Event-Driven Programming for Embedded Systems" has a good introduction and a reference implementation for all this.
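For a feel of what that exit-up/enter-down ordering looks like, here is a hedged C sketch (heavily simplified, names hypothetical; it is not Samek's implementation, and it omits self-transitions, initial-substate entry and transition guards):

Code: [Select]
#include <stddef.h>

struct hsm_state {
    const char             *name;
    const struct hsm_state *parent;      /* NULL for the top state */
    void                   (*on_entry)(void);
    void                   (*on_exit)(void);
};

static int depth(const struct hsm_state *s)
{
    int d = 0;
    for (; s != NULL; s = s->parent)
        d++;
    return d;
}

/* Exit from 'from' up to (but not including) the lowest common ancestor of
   'from' and 'to', then enter from just below that ancestor down to 'to'. */
static void hsm_transition(const struct hsm_state *from, const struct hsm_state *to)
{
    const struct hsm_state *enter_path[8];   /* maximum nesting depth, hypothetical */
    int df = depth(from), dt = depth(to), n = 0;

    /* Equalise depths: exit extra source-side states, record extra target-side states. */
    while (df > dt) { if (from->on_exit) from->on_exit(); from = from->parent; df--; }
    while (dt > df) { enter_path[n++] = to; to = to->parent; dt--; }

    /* Walk both sides up in lockstep until the common ancestor is reached. */
    while (from != to) {
        if (from->on_exit) from->on_exit();
        from = from->parent;
        enter_path[n++] = to;
        to = to->parent;
    }

    /* Enter the recorded target-side states, outermost first, target last. */
    while (n > 0) {
        const struct hsm_state *s = enter_path[--n];
        if (s->on_entry) s->on_entry();
    }
}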

I have created a set of C macros that are used to define and implement a simple DSL (domain specific language) for describing these HSMs in a compact way.

I have also included support for state timeout events, optional default timeouts for each state, which means that entering a state will start the state timer if the state has a default timeout time defined. Exiting the state will stop the state timer. The state timeout handler will be called automagically if the state timer expires.
« Last Edit: January 05, 2023, 11:00:07 am by Kalvin »
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6967
  • Country: fi
    • My home page and email address
Re: event-oriented programming language
« Reply #27 on: January 05, 2023, 11:41:04 am »
From state machines we can easily slide into the next important design decision when considering an event-oriented language: (standard) library interfaces.

If we consider typical microcontroller applications, a lot of basic I/O is handled at least partially by peripheral subsystems, without constant supervision from the actual processor.  In particular, consider things like UART and SPI/QSPI transfers, especially slow block I/O to something like a microSD card (very cheap, very large storage capacity, easy to interface to via SPI/QSPI).  Let's examine such a write operation.

In imperative languages, write operations are synchronous: the call returns when the data has been sent to the storage device.  Especially on microcontrollers, much of that time is spent busy-waiting for the operation to complete.  We could get more out of the same hardware if that wait time could be used for something more useful.  To do so, a write must be turned into a two-part operation: you start the write, and at some time later the write completes; in the meantime, you must not modify the buffer contents, as it is still in the process of being written out.

I've used this interface in MPI extensively in both Fortran and C (MPI_Isend, MPI_Irecv), and it has worked really, really well for me in various use cases.  Each ongoing I/O operation is associated with an MPI_Request object, which can be tested using MPI_Test* functions and waited for completion using MPI_Wait* functions.
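For anyone who hasn't used that MPI interface, a minimal C sketch of the start/complete split (real MPI calls, but error handling and the surrounding program omitted):

Code: [Select]
#include <mpi.h>

void exchange_with_peer(double *out, double *in, int n, int peer)
{
    MPI_Request send_req, recv_req;

    /* Start both transfers; these calls return immediately. */
    MPI_Isend(out, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &send_req);
    MPI_Irecv(in,  n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &recv_req);

    /* ... do useful computation here.  'out' must not be modified and 'in'
       must not be read until the corresponding request has completed;
       MPI_Test() can be used to poll for completion without blocking. ... */

    /* Block until both transfers have completed. */
    MPI_Wait(&send_req, MPI_STATUS_IGNORE);
    MPI_Wait(&recv_req, MPI_STATUS_IGNORE);
}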

However, I've also had big arguments about this with "MPI experts", who claim such interfaces are "inherently unsafe", just because they do not understand the concept well enough to use it effectively.  So, conceptual clarity of how it works is absolutely crucial.

I consider such completion events a third type, perhaps 'pending': the event is known to occur some time in the future, but its payload (completion status, perhaps error) and exact time are unknown.

Instead of having to handle all possible orders of events, postponing events that have already been queued until after one or more such pending events have been received and handled can make the code much simpler.  (I personally use it all the time in MPI, by documenting well the tags of pending communication events, and carefully designing the order in which such events/messages are read.)
Thus, an important question is how this postponing is expressed in the language.

Those used to implementing event and state machines in imperative languages will immediately gravitate towards "you use a loop around the event queue, so just add statements to requeue the event if it can't be handled yet", ending up with an imperative-oriented event handling loop.
I suspect that a language-based filter assignment, implemented in the runtime, would yield more efficient and robust microcontroller firmware.

This also relates to how the hardware-generated events, like interrupts, are mapped to handlers or event queues.
A language keyword or operator could be used to designate the hardware sources and the handler or event queue they are mapped to; this would also generate the necessary runtime code (interrupt handler assignment and trampoline or event-queueing), and allow things like "and include this object as context for the event".  (That way e.g. buttons could use the exact same event handling code, and just have a unique context object or event attribute per button.)
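As a plain C approximation of that "same handler, different context object" idea (all names hypothetical):

Code: [Select]
/* One context object per button; the handler code itself is shared. */
struct button_ctx {
    const char *name;
    unsigned    press_count;
};

struct event {
    int   id;
    void *ctx;    /* context object attached when the event source was mapped */
};

static struct button_ctx start_button = { "start", 0 };
static struct button_ctx stop_button  = { "stop",  0 };

/* A single handler serves every button; behaviour differs only via the context. */
static void on_button_event(const struct event *ev)
{
    struct button_ctx *btn = ev->ctx;
    btn->press_count++;
    /* ...post a higher-level event, log the press, etc... */
}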

In Javascript and GUI toolkits like Qt and Gtk, events are essentially callbacks generated from basically any suitable object as the context.

I am leaning towards a different approach, one where there are one or more event queues, abstract instances, with the aforementioned dependencies and postponing defined in terms of which queue is "active" and which "paused".  The event queue itself is an abstraction; a first-level "object" in the language, without any limit as to what kind of events or which context those events have, used for the management of event order, priority, and interdependence.
It might be very useful to not associate events themselves with any priority, with each queue being strictly a FIFO, and only define priority between event queues. (This would significantly simplify the event queue/dequeue operations.)

Then, the logical equivalent of a semaphore is an event queue, with sem_wait equating to grab object from queue, and sem_post to putting an object to the queue; the equivalent of a mutex is a single-event queue where mutex_lock equates to grabbing the token object from the queue, and mutex_unlock to putting the token object back to the queue, with waiters blocking on the grab-token-object operation.
Trick is, for this to be useful on microcontrollers, the operations must compile to efficient machine code.
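A rough C11 sketch of that mutex-as-single-token-queue equivalence, for a single core and assuming a hypothetical yield_to_event_loop() that lets other work run while waiting; the token "queue" degenerates into one atomic flag:

Code: [Select]
#include <stdatomic.h>

extern void yield_to_event_loop(void);   /* hypothetical: run other queued work while waiting */

/* A one-slot "queue" holding a single token: flag set = token taken, clear = token available. */
static atomic_flag token_taken = ATOMIC_FLAG_INIT;

static void mutex_lock(void)
{
    /* "Grab the token object from the queue", waiting until it is available. */
    while (atomic_flag_test_and_set_explicit(&token_taken, memory_order_acquire))
        yield_to_event_loop();
}

static void mutex_unlock(void)
{
    /* "Put the token object back into the queue." */
    atomic_flag_clear_explicit(&token_taken, memory_order_release);
}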

(Just because the abstraction sounds nice does not mean it is useful in practice.  It must be both understandable to us human programmers and compile to effective and efficient machine code.  Abstractions that fail either one have no place in microcontroller and limited-resources embedded development!)
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20768
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: event-oriented programming language
« Reply #28 on: January 05, 2023, 11:54:54 am »
https://en.wikipedia.org/wiki/Event-driven_programming

Quote
Criticism
The design of those programs which rely on event-action model has been criticised, and it has been suggested that the event-action model leads programmers to create error-prone, difficult to extend and excessively complex application code.[2] Table-driven state machines have been advocated as a viable alternative.[5] On the other hand, table-driven state machines themselves suffer from significant weaknesses including the state explosion phenomenon.[6] A solution for this is to use Petri nets.
Bold added by me.

State explosion can be limited using hierarchical state machines. They are somewhat more complex to implement, but way more flexible than the traditional flat, table-driven state machines.

They aren't that much more complex to implement, and if the FSM is complex then it is a good tradeoff. I've used the state behaviour=class, event=method, current state = singleton instance of class, to very good effect.

It is easy to add logging with trivial performance impact in a production system, which was invaluable during commissioning and in (correctly) deflecting blame onto the other company's products. Great for avoiding lawyers :)

Ditto adding performance measurements.

Implementing the state transitions in hierarchical state machines is a bit more involved compared to simple state machines, because the HSM needs to support entry actions, the initial-state concept and exit handlers, and perform them in the correct order: states are first exited up to the common parent, and then the target state is entered, executing any entry actions and checking initial states along the way.

That's only beneficial if you are attempting to implement one type of FSM specification: a Harel State Chart (i.e. the UML state machine diagram). You don't need it if you are implementing the conceptually simpler FSM patterns where an event only invokes an action that depends on the current state. That's equivalent to table-driven FSMs and if/then/else/case patterns.

I haven't yet needed the full Harel/UML form, although it can have benefits in some circumstances.

Quote
Miro Samek's book "Practical UML Statecharts in C/C++, 2nd Ed: Event-Driven Programming for Embedded Systems" has a good introduction and a reference implementation for all this.

Great minds think alike, although I prefer the GoF Design Patterns book for its brevity and for being language agnostic.

Quote
I have created a set of C macros that are used to define and implement a simple DSL (domain specific language) for describing these HSMs in a compact way.

I hate macros (that were beneficial in the 1970s), since they are a form of Design Specific Language that cripples IDEs and other tooling, and requires special training in non-transferable skills.

Quote
I have also included support for state timeout events, optional default timeouts for each state, which means that entering a state will start the state timer if the state has a default timeout time defined. Exiting the state will stop the state timer. The state timeout handler will be called automagically if the state timer expires.

Yup.

Other extensions are easy and possible, especially keeping a concise history of the state/event trajectory, which is useful when understanding "strange behaviour".
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20768
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: event-oriented programming language
« Reply #29 on: January 05, 2023, 12:12:06 pm »
...
I suspect that a language-based filter assignment, implemented in the runtime, would yield more efficient and robust microcontroller firmware.

Similar to Erlang's pattern matching?

Quote
...
I am leaning towards a different approach, one where there are one or more event queues, abstract instances, with the aforementioned dependencies and postponing defined in terms of which queue is "active" and which "paused".  The event queue itself is an abstraction; a first-level "object" in the language, without any limit as to what kind of events or which context those events have, used for the management of event order, priority, and interdependence.
It might be very useful to not associate events themselves with any priority, with each queue being strictly a FIFO, and only define priority between event queues. (This would significantly simplify the event queue/dequeue operations.)

Good choices :)

Anytime priority is introduced for normal operations, sooner or later people will want to fiddle with priorities to avoid rare emergent problems. Such fiddling might avoid that problem materialising, but will introduce others where they didn't exist before.

Design principle:
  • if the order in which events are processed is important, then all the events must be in a single FIFO
  • if the order in which events are processed is not important, then all the events may be in a single FIFO

Multiple levels of FIFO may be desirable. For example, in a telecoms server, there will be a single FIFO for all incoming events. There will also be a single FIFO associated with each call in progress,  only containing events relevant to that call. Transferring an event from the "incoming" FIFO to one of the "call FIFOs" is done when convenient.
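A rough C sketch of that two-level arrangement (sizes and types hypothetical): a dispatcher drains the single incoming FIFO when convenient and drops each event into the per-call FIFO it belongs to.

Code: [Select]
#include <stdbool.h>
#include <stddef.h>

#define FIFO_SLOTS 32
#define MAX_CALLS  8

struct event {
    unsigned call_id;   /* which call in progress this event belongs to */
    int      type;
    /* payload... */
};

struct fifo {
    struct event buf[FIFO_SLOTS];
    size_t head, tail;            /* head == tail means empty */
};

static bool fifo_put(struct fifo *q, const struct event *ev)
{
    size_t next = (q->head + 1) % FIFO_SLOTS;
    if (next == q->tail)
        return false;             /* full */
    q->buf[q->head] = *ev;
    q->head = next;
    return true;
}

static bool fifo_get(struct fifo *q, struct event *ev)
{
    if (q->tail == q->head)
        return false;             /* empty */
    *ev = q->buf[q->tail];
    q->tail = (q->tail + 1) % FIFO_SLOTS;
    return true;
}

static struct fifo incoming;               /* single FIFO for all incoming events */
static struct fifo call_fifo[MAX_CALLS];   /* one FIFO per call in progress */

/* Run "when convenient": move events from the incoming FIFO to the call FIFOs. */
static void dispatch_incoming(void)
{
    struct event ev;
    while (fifo_get(&incoming, &ev))
        if (ev.call_id < MAX_CALLS)
            fifo_put(&call_fifo[ev.call_id], &ev);
}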

Quote
Then, the logical equivalent of a semaphore is an event queue, with sem_wait equating to grab object from queue, and sem_post to putting an object to the queue; the equivalent of a mutex is a single-event queue where mutex_lock equates to grabbing the token object from the queue, and mutex_unlock to putting the token object back to the queue, with waiters blocking on the grab-token-object operation.
Trick is, for this to be useful on microcontrollers, the operations must compile to efficient machine code.

There's no fundamental reason why it would be any less efficient than other mechanisms that also take account of atomicity, volatility and parallelism.

Quote
(Just because the abstraction sounds nice does not mean it is useful in practice.  It must be both understandable to us human programmers and compile to effective and efficient machine code.  Abstractions that fail either one have no place in microcontroller and limited-resources embedded development!)

Yup.

But I'll modify that to include concepts like "as simple as possible but no simpler" and "simple programs that obviously have no defects vs complex programs that have no obvious defects" and "visibility of deadlock/livelock properties".
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: DiTBho

Offline Kalvin

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: event-oriented programming language
« Reply #30 on: January 05, 2023, 12:16:38 pm »
In imperative languages, write operations are synchronous: the call returns when the data has been sent to the storage device.  Especially on microcontrollers, much of that time is spent busy-waiting for the operation to complete.  We could get more out of the same hardware if that wait time could be used for something more useful.  To do so, a write must be turned into a two-part operation: you start the write, and at some time later the write completes; in the meantime, you must not modify the buffer contents, as it is still in the process of being written out.

If you are using some sort of preemptive RTOS, this is typically the case, i.e. the program flow will contain these (inefficient) busy loops.

Using a co-operative scheduler and event-driven techniques will typically prevent this. Patterns like Observer and Publish/Subscribe are nice techniques to trigger processing only when something needs to be done. Combining these techniques with state machines and event queues, you can create systems that are responsive. Also, the devices will be quite deterministic in the sense that nothing is happening if there are no events generated, and the device is active only when processing events.
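A minimal C sketch of the Observer idea in this setting (names hypothetical): handlers subscribe to a topic, publishing just walks the subscriber list, and nothing at all runs unless an event is actually published.

Code: [Select]
#include <stddef.h>

#define MAX_SUBSCRIBERS 8

typedef void (*observer_fn)(const void *event_data, void *ctx);

struct topic {
    observer_fn fn[MAX_SUBSCRIBERS];
    void       *ctx[MAX_SUBSCRIBERS];
    size_t      count;
};

static int topic_subscribe(struct topic *t, observer_fn fn, void *ctx)
{
    if (t->count >= MAX_SUBSCRIBERS)
        return -1;
    t->fn[t->count]  = fn;
    t->ctx[t->count] = ctx;
    t->count++;
    return 0;
}

/* Nothing is executed anywhere until somebody publishes an event. */
static void topic_publish(const struct topic *t, const void *event_data)
{
    for (size_t i = 0; i < t->count; i++)
        t->fn[i](event_data, t->ctx[i]);
}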

For time-critical systems one can use time-triggered scheduling, for example. This kind of scheduling will produce even more deterministic systems, if needed.
 
The following users thanked this post: DiTBho

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6967
  • Country: fi
    • My home page and email address
Re: event-oriented programming language
« Reply #31 on: January 05, 2023, 01:11:52 pm »
I suspect that a language-based filter assignment, implemented in the runtime, would yield more efficient and robust microcontroller firmware.
Similar to Erlang's pattern matching?
No, not really.  I used 'suspect', because I don't have a clear picture of exactly what would work for me.

For example, we could map hardware interrupts and such to event queues using
    'Map' [ context-object '.' ] event-name 'to' [ queue-object '.' ] event-invocation(parameter-list) [ 'using' context-object ] ';'

One possible idea is to support explicit filters to a queue-object, for example
    'Filter' queue-object [ ':' [ 'Allow' | 'Drop' | 'Postpone' ] 'if' filter-expression ]* ';'
that are implicitly evaluated whenever the queue-object is accessed for events (by the runtime).

Another is to support named 'blocks', like electrichickens use those multi-locks when shutting down systems they're working on, for example
    'Block' queue-object 'with' block-identifier [ 'until' auto-unblock-condition ] ';'
    'Unblock' queue-object 'with' block-identifier ';'
where the queue is blocked if it has one or more blocks placed on it.

The former way is more powerful, but also has more overhead (since the filter-expression is executed potentially for each queued event during each queue state check).  The latter is simpler, potentially requiring just one bit per block per event queue.

Additional approaches are explicit filter expressions for forwarding events to other queues, and so on.
I just don't know enough to say what would work best for myself, yet.
Like I said, the exploration of this is something I'd love to contribute to, but is too much work for a single person to achieve.
I also need quite a bit of pushback whenever I don't see the downsides of my own suggestions or my own errors yet; that too happens too often.  :P

Multiple levels of FIFO may be desirable. For example, in a telecoms server, there will be a single FIFO for all incoming events. There will also be a single FIFO associated with each call in progress,  only containing events relevant to that call. Transferring an event from the "incoming" FIFO to one of the "call FIFOs" is done when convenient.
That is exactly the sort of pattern where I'm thinking more than one "event queue" object would be useful.

Dealing with individual event priorities leads to all sorts of complex and chaotic situations (like priority inversion).
Dealing with multiple queues, in the runtime (so that one can obtain events from a set of queues, ordered by queue priority for example, with optional filters applied per queue), seems a much more reasonable level of complexity to me.

Quote
Then, the logical equivalent of a semaphore is an event queue, with sem_wait equating to grab object from queue, and sem_post to putting an object to the queue; the equivalent of a mutex is a single-event queue where mutex_lock equates to grabbing the token object from the queue, and mutex_unlock to putting the token object back to the queue, with waiters blocking on the grab-token-object operation.
Trick is, for this to be useful on microcontrollers, the operations must compile to efficient machine code.
There's no fundamental reason why it would be any less efficient than other mechanisms that also take account of atomicity, volatility and parallelism.
No, but it is enticing to anyone developing a new language to think of an abstraction they love, that turns out to be hellishly complicated to implement on currently available hardware, requiring lots of RAM and complex operations like stack unwinding.

The trick is to consider the logical equivalents as having approximately the same level of abstraction and complexity.  So, if you think of a way of implementing an event queue that requires the equivalents of mutexes and condition variables to implement, it is probably not suitable for real life implementation.
Indeed, in systems programming, I mostly use lockless techniques using atomic ops for these (GCC/Clang/ICC atomic built-ins on x86-64 in particular), so I know it is/should be possible on most architectures.
On some, like AVR, you might need to disable interrupts for a few cycles (less than a dozen per critical section), but it should be doable.

But I'll modify that to include concepts like "as simple as possible but no simpler" and "simple programs that obviously have no defects vs complex programs that have no obvious defects" and "visibility of deadlock/livelock properties".
Very true.

The reason I don't mind having "lots" of reserved keywords is that explicit expressions which make static analysis of such things (like which blocks may be placed on which event queues) easier are more desirable and important than the problem of having to rename things in user code to avoid conflicts.

Other features, like "arrays" (memory ranges) instead of pointers (memory points/singular addresses), if constructed so that static analysis can verify if all accesses are within the ranges, can fix the fundamental memory safety issues we have with most C code right now.  But these, too, rely on ensuring static and compile-time analysis is well supported by the language features and definitions.
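In C terms, that "range instead of bare pointer" idea is roughly a slice type plus checked accessors; the point would be that a static analyser (or the compiler of such a language) can prove the checks away. A hedged sketch:

Code: [Select]
#include <stdbool.h>
#include <stddef.h>

/* A "range": base pointer plus element count, always carried together. */
struct u8_range {
    unsigned char *ptr;
    size_t         len;
};

/* Checked element access: an out-of-range access is reported, never performed. */
static inline bool range_get(struct u8_range r, size_t i, unsigned char *out)
{
    if (i >= r.len)
        return false;
    *out = r.ptr[i];
    return true;
}

/* Checked sub-range: returns an empty range if the requested slice does not fit. */
static inline struct u8_range range_slice(struct u8_range r, size_t off, size_t len)
{
    struct u8_range s = { NULL, 0 };
    if (off <= r.len && len <= r.len - off) {
        s.ptr = r.ptr + off;
        s.len = len;
    }
    return s;
}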

In imperative languages, write operations are synchronous: the call returns when the data has been sent to the storage device.  Especially on microcontrollers, much of that time is spent busy-waiting for the operation to complete.  We could get more out of the same hardware if that wait time could be used for something more useful.  To do so, a write must be turned into a two-part operation: you start the write, and at some time later the write completes; in the meantime, you must not modify the buffer contents, as it is still in the process of being written out.

If you are using some sort of preemptive RTOS, this is typically the case, i.e. the program flow will contain these (inefficient) busy loops.
Sure, but there is no actual technical or logical requirement for them.  Even in C, one can implement a write as
    int async_write(int fd, const void *buf, size_t len, void *ctx,
                    int (*completed)(int fd, const void *buf, size_t len, void *ctx, int status),
                    int (*failure)(int fd, const void *buf, size_t len, void *ctx, int status));
where the call returns immediately, but the buf is read sometime afterwards, and must stay unmodified, until one of the two callbacks is called.
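A hypothetical caller of that interface might then look like the following (continuing the same sketch; async_write and the record type are not a real API):

Code: [Select]
#include <stddef.h>
#include <stdio.h>

/* The asynchronous write interface sketched above. */
extern int async_write(int fd, const void *buf, size_t len, void *ctx,
                       int (*completed)(int fd, const void *buf, size_t len, void *ctx, int status),
                       int (*failure)(int fd, const void *buf, size_t len, void *ctx, int status));

struct record { char data[128]; size_t len; int in_flight; };

static int tx_done(int fd, const void *buf, size_t len, void *ctx, int status)
{
    struct record *r = ctx;
    r->in_flight = 0;     /* the buffer may be reused or freed from here on */
    return 0;
}

static int tx_failed(int fd, const void *buf, size_t len, void *ctx, int status)
{
    struct record *r = ctx;
    fprintf(stderr, "write of %zu bytes failed: %d\n", len, status);
    r->in_flight = 0;
    return 0;
}

void send_record(int fd, struct record *r)
{
    r->in_flight = 1;
    /* Returns immediately; r->data must stay untouched until one of the callbacks fires. */
    async_write(fd, r->data, r->len, r, tx_done, tx_failed);
    /* ...carry on with other work here... */
}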

This is the difference between one's write operation being event-oriented or imperative –– although others use synchronous vs. asynchronous, and other terms...  (Which is why being hung up on specific terms, like 'event-oriented' vs. 'event-based', is simply utter bullshit: human languages are vague, so as long as we agree to what we mean by each term, the terms themselves don't matter, only the concepts the terms represent matter.  And as long as we convey the concepts to each other, all is good.)

Let me reiterate: I am personally not proposing anything new at the machine code level.  Everything I've described has already been done in various languages, and quite often in C.

What I am trying to achieve by describing how to discover what a true low-level event-based microcontroller language could be, is to discuss how to find a better way (than current C/C++) to express these patterns –– and hopefully avoid the deficiencies C has (like memory safety, difficulty of static analysis, no standard function/variable attribute declaration mechanism, and so on), arriving at a better programming language for microcontrollers and similar stuff, where the program or firmware is built on top of the concept of events.
 
The following users thanked this post: DiTBho

Offline DiTBhoTopic starter

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: event-oriented programming language
« Reply #32 on: January 05, 2023, 01:17:53 pm »
I have created a set of C macros that are used to define and implement a simple DSL (domain specific language) for describing these HSMs in a compact way.

very interesting! Can you show some examples of them?
C macros always smell of possible language add-ons


(here, OMG, in my-c there is no cpp and #define is banned
so I am really forced to add real language constructs

DSL for FSMs: sounds good!!!)
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline DiTBhoTopic starter

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: event-oriented programming language
« Reply #33 on: January 05, 2023, 01:19:01 pm »
I have also included support for state timeout events, optional default timeouts for each state, which means that entering a state will start the state timer if the state has a default timeout time defined. Exiting the state will stop the state timer. The state timeout handler will be called automagically if the state timer expires.

WOW!!! This sounds awesome for my XPC860 board, PowerPC 32bit core, nothing different from a PPC603 with 32-bit general-purpose registers (GPRs), but (great news!!!) with Quad Integrated Communications Controller.

It's called "PowerQUICC", and it's a very versatile one-chip integrated microprocessor and peripheral combination solution, designed for a variety of controller applications, but profiled as a networking/communications-oriented microprocessor.

In short, it's a classic 90s PowerPC with 32 GPRs (general purpose registers, 32-bit), plus a RISC communications processor (aka the CPM), which makes it full of fun because it's stuffed with modules :o :o :o

The CPM is a weird dedicated RISC-ish core, which has been enhanced by the addition of the inter-integrated circuit (I2C) and SPI channels and a real-time clock, support for continuous mode transmission and reception on all - 16 - serial DMA channels, with up to 8 Kbytes of dual-port RAM buffer, as well as up to 2 (or even 4!!!) Fast Ethernet controllers, fully compliant with the IEEE 802.3u standard (except when you wanna use the UTOPIA module in ATM mode, which I frankly will ignore), and other stuff like HDLC/SDLC channels.

The memory controller has also been enhanced, enabling the MPC860 to support *any* type of memory, including high-performance memories and new types of DRAMs.

All of these pieces of hardware have time-outs and queues, and produce and consume events.

As my friend Nominal Animal said above, you'd better experiment to find out what fits your needs and tastes best. Perhaps I am wrong, but I think this chip is one of the best to experience event oriented programming :D
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline Sherlock Holmes

  • Frequent Contributor
  • **
  • !
  • Posts: 570
  • Country: us
Re: event-oriented programming language
« Reply #34 on: January 05, 2023, 02:17:05 pm »
There are some interesting points being made here. Something that I find extremely interesting is the fact that there are at least two models of computability: one being the Turing machine and the other being the Lambda Calculus.

These are as different as chalk and cheese, yet have been shown to be logically equivalent in that they can each describe computation; there's no computable problem that one can solve that the other cannot.

Lambda calculus, however, does not involve state, loops, or mutability; it offers powerful benefits (as seen in functional languages) but seems ill suited to MCU applications.
“When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.” ~ Arthur Conan Doyle, The Case-Book of Sherlock Holmes
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6967
  • Country: fi
    • My home page and email address
Re: event-oriented programming language
« Reply #35 on: January 05, 2023, 03:33:29 pm »
Perhaps I am wrong, but I think [XPC860] is one of the best to experience event oriented programming :D
The one downside I can see is that you'll need to find an old/NOS/used board, since XPC860 (and similar ones like NXP MPC860) are no longer available at larger sellers like Mouser and Digikey.  Otherwise, they definitely look well suited.

(I do wonder, if their (assumed!) lack of success is related to there not really being a language where one could easily and effortlessly express the patterns this kind of hardware is well suited for?  After all, human history is full of inventions where the technically lesser implementation has won due to human-related reasons.  Popularity does not correlate strongly with quality or price, even though some humans fervently believe so.)

To get a pretty good conceptual grip on the benefits and downsides, I believe client-side HTML+Javascript –– so zero tooling needed, only a plain text editor and a relatively recent browser, on any OS or architecture –– is a viable choice.  Bad choices, like long calculation done directly within the event handler, exhibit the same problems as they tend to do in hardware, too: the UI events are queued, so nothing seems to work, until everything gets registered at once; current browsers will even interrupt such code and ask the user if they want to stop the "hung script"!
If you use WebSockets or HTTP/HTTPS queries, they're fully asynchronous (so that the call only initiates the I/O or request, with registered callbacks called when the I/O or request completes).
Thus, to make effective client-side HTML+Javascript stuff, you must understand and apply the paradigm/approach: just translating a C/Python/VB application to JS will not work in a browser (because the browser environment is inherently event-oriented).

There are two use cases where client-side HTML+Javascript is particularly useful in my opinion:
1. Simulating embedded user interfaces, especially menus
2. Simple tool pages, like my FIR analysis page (put the coefficients, like 0.2 0.4 0.6 0.8 1.0 0.8 0.6 0.4 0.2 to the top (right) input box, and press Enter, and it'll show the FIR frequency response), or the window function spectral response (put 0.2 0.4 0.6 0.8 1.0 0.8 0.6 0.4 0.2 0.0 in the upper red box, and 0.2 0.4 0.6 0.8 1.0 1.0 0.8 0.6 0.4 0.2 in the upper blue box, and click Recalculate, to see the difference in spectral response between "odd" and "even" triangular window functions)

The former is useful because that way the most user-visible part (interface) gets tested and simulated and worked out first, instead of implemented ad-hoc when the functionality is done.
The latter is useful because it is truly portable, and browsers' Javascript engines are nowadays ridiculously well optimized: even naïve code using lots of Math.sin() and Math.cos() like my examples above run at practically native code speed.
The examples above are standalone pages, single HTML files that contain the Javascript code, and do not require any access outside that file, not even Internet access.  One only needs a server if one wants to "save" data to or "load" data from external files.  (A script on the server takes the POST data, and reformats it as the desired MIME type file.  For file upload, it takes the POST data containing the uploaded files, parses the data, and inserts the parsed data to the same HTML file, typically as a Javascript array.)

So, I agree if we're talking about hardware to do real development and experiments and learning on.  I do claim HTML+Javascript is a better introduction to event-oriented programming in general, however!  ;)
(For no other real reason than that it requires no other investment except human time and effort: we all have the tools necessary already installed.)
 
The following users thanked this post: DiTBho

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20768
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: event-oriented programming language
« Reply #36 on: January 05, 2023, 03:46:00 pm »
In imperative languages, write operations are synchronous: the call returns when the data has been sent to the storage device.  Especially on microcontrollers, much of that time is spent busy-waiting for the operation to complete.  We could get more out of the same hardware if that wait time could be used for something more useful.  To do so, a write must be turned into a two-part operation: you start the write, and at some time later the write completes; in the meantime, you must not modify the buffer contents, as it is still in the process of being written out.

If you are using some sort of preemptive RTOS, this is typically the case, i.e. the program flow will contain these (inefficient) busy loops.

It needn't be inefficient. All you need is one processor/core per event loop. Cores are cheap nowadays :)

Quote
Using a co-operative scheduler and event-driven techniques will typically prevent this. Patterns like Observer and Publish/Subscribe are nice techniques to trigger processing only when something needs to be done. Combining these techniques with state machines and event queues, you can create systems that are responsive. Also, the devices will be quite deterministic in the sense that nothing is happening if there are no events generated, and the device is active only when processing events.

RTOSs are merely a hack to multiplex several processes (i.e. event loops) onto a single execution engine. They are a useful hack when insufficient execution engines are available.

Quote
For time-critical systems one can use time-triggered scheduling, for example. This kind of scheduling will produce even more deterministic systems, if needed.


There is absolutely nothing special about timeouts: they are merely an event equivalent to a message arriving, an input being available or an output completing. All such events should be treated identically at the language level and the runtime level.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20768
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: event-oriented programming language
« Reply #37 on: January 05, 2023, 04:04:34 pm »
I suspect that a language-based filter assignment, implemented in the runtime, would yield more efficient and robust microcontroller firmware.
Similar to Erlang's pattern matching?
No, not really.  I used 'suspect', because I don't have a clear picture of exactly what would work for me.

For example, we could map hardware interrupts and such to event queues using
    'Map' [ context-object '.' ] event-name 'to' [ queue-object '.' ] event-invocation(parameter-list) [ 'using' context-object ] ';'

One possible idea is to support explicit filters to a queue-object, for example
    'Filter' queue-object [ ':' [ 'Allow' | 'Drop' | 'Postpone' ] 'if' filter-expression ]* ';'
that are implicitly evaluated whenever the queue-object is accessed for events (by the runtime).

Another is to support named 'blocks', like electrichickens use those multi-locks when shutting down systems they're working on, for example
    'Block' queue-object 'with' block-identifier [ 'until' auto-unblock-condition ] ';'
    'Unblock' queue-object 'with' block-identifier ';'
where the queue is blocked if it has one or more blocks placed on it.

The former way is more powerful, but also has more overhead (since the filter-expression is executed potentially for each queued event during each queue state check).  The latter is simpler, potentially requiring just one bit per block per event queue.

I have found a FIFO with two put variants and two get variants to be sufficient for my purposes: a put which either blocks until it succeeds or, if the FIFO is full, immediately returns control to the calling process; and similarly a get which either blocks until the FIFO isn't empty or, if the FIFO is empty, immediately returns control to the calling process.

In almost all cases the blocking variant is sufficient. If it isn't sufficient then it usually means the system is under-provisioned.
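Sketching those four operations in C (a single-producer/single-consumer ring buffer for a cooperative, single-core setting; the blocking variants simply yield until the non-blocking ones succeed; names hypothetical):

Code: [Select]
#include <stdbool.h>
#include <stddef.h>

#define FIFO_SLOTS 16

struct fifo {
    void  *slot[FIFO_SLOTS];
    size_t head, tail;                 /* head == tail means empty */
};

extern void yield_to_scheduler(void);  /* hypothetical: let other processes run */

/* Non-blocking put: returns false immediately if the FIFO is full. */
static bool fifo_try_put(struct fifo *q, void *item)
{
    size_t next = (q->head + 1) % FIFO_SLOTS;
    if (next == q->tail)
        return false;
    q->slot[q->head] = item;
    q->head = next;
    return true;
}

/* Non-blocking get: returns false immediately if the FIFO is empty. */
static bool fifo_try_get(struct fifo *q, void **item)
{
    if (q->tail == q->head)
        return false;
    *item = q->slot[q->tail];
    q->tail = (q->tail + 1) % FIFO_SLOTS;
    return true;
}

/* Blocking variants: wait (by yielding) until the operation succeeds. */
static void fifo_put(struct fifo *q, void *item)
{
    while (!fifo_try_put(q, item))
        yield_to_scheduler();
}

static void *fifo_get(struct fifo *q)
{
    void *item;
    while (!fifo_try_get(q, &item))
        yield_to_scheduler();
    return item;
}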

Quote
Additional approaches are explicit filter expressions for forwarding events to other queues, and so on.
I just don't know enough to say what would work best for myself, yet.
Like I said, the exploration of this is something I'd love to contribute to, but is too much work for a single person to achieve.
I also need quite a bit of pushback whenever I don't see the downsides of my own suggestions or my own errors yet; that too happens too often.  :P

I seek out people and listen carefully to what they say - especially when they disagree with me :)

Quote
Multiple levels of FIFO may be desirable. For example, in a telecoms server, there will be a single FIFO for all incoming events. There will also be a single FIFO associated with each call in progress,  only containing events relevant to that call. Transferring an event from the "incoming" FIFO to one of the "call FIFOs" is done when convenient.
That is exactly the sort of pattern where I'm thinking more than one "event queue" object would be useful.

Dealing with individual event priorities leads to all sorts of complex and chaotic situations (like priority inversion).
Dealing with multiple queues, in the runtime (so that one can obtain events from a set of queues, ordered by queue priority for example, with optional filters applied per queue), seems a much more reasonable level of complexity to me.

It isn't clear to me whether it is better to have the filtering/matching as part of the runtime/language, or as part of your process. My gut feel is that being part of the runtime/language is best in a limited number of very important cases, e.g. high availability and hot-swapping applications. Otherwise attempting to use it for application specific filtering is likely to be a bad fit.

Quote
Quote
Then, the logical equivalent of a semaphore is an event queue, with sem_wait equating to grab object from queue, and sem_post to putting an object to the queue; the equivalent of a mutex is a single-event queue where mutex_lock equates to grabbing the token object from the queue, and mutex_unlock to putting the token object back to the queue, with waiters blocking on the grab-token-object operation.
Trick is, for this to be useful on microcontrollers, the operations must compile to efficient machine code.
There's no fundamental reason why it would be any less efficient than other mechanisms that also take account of atomicity, volatility and parallelism.
No, but it is enticing to anyone developing a new language to think of an abstraction they love, that turns out to be hellishly complicated to implement on currently available hardware, requiring lots of RAM and complex operations like stack unwinding.

I have difficulty distinguishing between hardware and software. Those that think it is easy have major gaps in understanding not only the theoretical fundamentals but also what's implemented in real systems.

Quote
The trick is to consider the logical equivalents as having approximately the same level of abstraction and complexity.  So, if you think of a way of implementing an event queue that requires the equivalents of mutexes and condition variables to implement, it is probably not suitable for real life implementation.
Indeed, in systems programming, I mostly use lockless techniques using atomic ops for these (GCC/Clang/ICC atomic built-ins on x86-64 in particular), so I know it is/should be possible on most architectures.
On some, like AVR, you might need to disable interrupts for a few cycles (less than a dozen per critical section), but it should be doable.

I'd like to see an analysis of which mechanisms are fundamentally necessary and sufficient, and of the implementation details that have led to other mechanisms being desirable.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Kalvin

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: event-oriented programming language
« Reply #38 on: January 05, 2023, 04:05:53 pm »
I have created a set of C macros that are used to define and implement a simple DSL (domain specific language) for describing these HSMs in a compact way.

very interesting! Can you show some examples of them?
C-macros always smell of possibly language add-on-s


(here, OMG, in my-c there is no cpp and #define is banned
so I am are really forced to add real language constructs

DMS for FSMs: sounds good!!!)

Here is a small snippet implementing some networking stuff from my code using HSM.

At the top of the source code file there are forward declarations for the state machine instance yns_sm, and its states.
The top level state is yns_sm_top_state, and its child states are listed below.

Code: [Select]
... <snip>
YHSM_DECLARE(yns_sm);
YHSM_STATE_DECLARE(yns_sm_top_state);

YHSM_STATE_DECLARE(yns_network_closed_state);
YHSM_STATE_DECLARE(yns_network_error_state);
YHSM_STATE_DECLARE(yns_network_fail_state);

YHSM_STATE_DECLARE(yns_network_opened_state);
YHSM_STATE_DECLARE(yns_network_reconnect_retry_check_state);
YHSM_STATE_DECLARE(yns_network_disconnected_state);
YHSM_STATE_DECLARE(yns_network_connected_state);

YHSM_STATE_DECLARE(yns_server_connected_state);
<snip> ...

Here is the implementation of the yns_server_connected_state.

The state has a default timeout value of 2000 milliseconds, and the state declares a local variable for counting the number of remaining retries.

Code: [Select]
#define YNS_SERVER_CONNECTED_STATE_TIMEOUT_ms 2000

static int yns_server_poll_retry_count;

Here we define a new state yns_server_connected_state, declare its parent state to be yns_network_connected_state, and set the default state timeout to 2000 milliseconds.

Code: [Select]
YHSM_STATE_BEGIN(yns_server_connected_state, &yns_network_connected_state, YNS_SERVER_CONNECTED_STATE_TIMEOUT_ms);

When the state machine enters this state, and if the state's default timeout value is larger than 0, the state's timeout timer will be started automatically with the default timeout value. When the state machine makes a transition exiting the state, this state's timeout timer will be stopped automatically. If the state's timeout timer expires while the state machine is still in this state, the event handler YHSM_STATE_TIMEOUT_ACTION() will be called.

Here is the state's enter action, which will be executed whenever a transition into this particular state takes place:

Code: [Select]
YHSM_STATE_ENTER_ACTION(yns_server_connected_state, hsm)
{
    yns_server_socket_open();
    yns_led_enable();

    // Notify the application observer that the connection to the server is established.
    yns_notify(YNETWORK_EVENT_SERVER_CONNECTED);

    yns_server_reconnect_retry_count =
        yns_network_config.server_reconnect_retry_count;

    yns_server_poll_retry_count = yns_network_config.server_poll_retry_count;
}

After the state's enter action is executed, the state machine will transition to the given initial state yns_network_idle_state.
By definition the yns_network_idle_state shall be a sub-state of the yns_server_connected_state.

Code: [Select]
YHSM_STATE_INIT_ACTION(yns_server_connected_state, hsm)
{
    YHSM_INIT(&yns_network_idle_state);
}

Macro YHSM_INIT() performs the actual initial state transition. By definition the sub-state's enter action will be executed during this initial transition.

Here is the state's timeout action, which will be executed if the state is still active when the 2000 millisecond timeout expires:

Code: [Select]
YHSM_STATE_TIMEOUT_ACTION(yns_server_connected_state, hsm)
{
        YHSM_TRAN(&yns_server_reconnect_retry_check_state);
}

Macro YHSM_TRAN() is used to trigger a state transition.

Here is the state's exit action, which will be executed whenever a transition from this particular state takes place:

Code: [Select]
YHSM_STATE_EXIT_ACTION(yns_server_connected_state, hsm)
{
    // Notify the application observer that the connection to the server is disconnected.
    yns_notify(YNETWORK_EVENT_SERVER_DISCONNECTED);

    yns_server_socket_close();
    yns_led_disable();
}

Here is the state's actual event handler:

Code: [Select]
YHSM_STATE_EVENT_HANDLER(yns_server_connected_state, hsm, ev)
{
    if (ev->id == YNS_EVENT_SOCKET_ERROR || ev->id == YNS_EVENT_NETWORK_TIMEOUT)
    {
        YHSM_TRAN(&yns_server_reconnect_retry_check_state);
    }

    if (ev->id == YNS_EVENT_SOCKET_DO_CLOSE)
    {
        YHSM_TRAN(&yns_server_reconnect_delay_state);
    }

    YHSM_RAISE();
}

Macro YHSM_TRAN() is used to trigger a state transition.

If the state doesn't want to handle some events, it will delegate those unhandled events to its parent state using macro YHSM_RAISE().

Here is the end of the state's definition.

Code: [Select]
YHSM_STATE_END(yns_server_connected_state);

Note particularly how the socket will always be opened and the LED turned on when entering the state, and the socket will always be closed and the LED turned off when exiting the state. This guarantees by definition that the setup and cleanup actions are always performed in the correct order, every time a state transition takes place.
 

Offline artag

  • Super Contributor
  • ***
  • Posts: 1249
  • Country: gb
Re: event-oriented programming language
« Reply #39 on: January 05, 2023, 04:08:52 pm »
Why 'language' as opposed to 'runtime' or 'library' ?
It's easy to do event-driven programming with any language, so I don't see why you'd need a new one. What problem are you trying to solve ?
 

Offline DiTBhoTopic starter

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: event-oriented programming language
« Reply #40 on: January 05, 2023, 04:28:09 pm »
I do wonder, if their (assumed!) lack of success is related to there not really being a language where one could easily and effortlessly express the patterns this kind of hardware is well suited for?

Well, umm, programming PowerPC is not that bad, you just have to care about more (pipeline and cache) quirks than with MIPS32, but programming the PowerQUICC CPM engine with classic imperative patterns is ... as terrible as programming the TPU engine of the old CPU32. The 683xx has been massively used by Ford Racing and it's still in production - I think - thanks to their internal language support, whereas the rest of us need to use TPU assembly and C tricks like #define macros to mimic the description of FSMs.

Classic imperative patterns are not a good fit, which is probably why no hobbyist wants anything to do with those chips (which are also hard to find, and expensive).

But hey? That hardware is really as weird as awesome, I mean a true good challenge  :D :D :D
« Last Edit: January 05, 2023, 04:34:45 pm by DiTBho »
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 
The following users thanked this post: Nominal Animal

Offline Kalvin

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: event-oriented programming language
« Reply #41 on: January 05, 2023, 04:32:49 pm »
In imperative languages, write operations are synchronous: the call returns when the data has been sent to the storage device.  Especially on microcontrollers, much of that time is spent busy-waiting for the operation to complete.  We could get more out of the same hardware if that wait time could be used for something more useful.  To do so, a write must be turned into a two-part operation: you start the write, and at some time later the write completes; in the meantime, you must not modify the buffer contents, as it is still in the process of being written out.

If you are using some sort of preemptive RTOS, this is typically the case, i.e. the program flow will contain these (inefficient) busy loops.

It needn't be inefficient. All you need is one processor/core per event loop. Cores are cheap nowadays :)

You are so spoiled! I don't have the luxury of multiple cores :) I work in a single-core embedded environment, where the available amount of Flash and RAM is typically very limited (cheap ARM Cortex-M3 devices), the device's energy consumption has to be minimized, and battery lifetime has to be maximized.

Quote
Quote
Using a co-operative scheduler and event-driven techniques will typically prevent this. Patterns like Observer and Publish/Subscribe are nice techniques to trigger processing only when something needs to be done. Combining these techniques with state machines and event queues, you can create systems that are responsive. Also, the devices will be quite deterministic in the sense that nothing is happening if there are no events generated, and the device is active only when processing events.

RTOSs are merely a hack to multiplex several processes (i.e. event loops) onto a single execution engine. They are a useful hack when insufficient execution engines are available.

Luxury items again! :) RTOSes need to allocate RAM for each task. As I have only a very limited amount of RAM available, I prefer to use / have to use a simple co-operative tasker/scheduler which requires only one global stack frame. In some special cases I may use a preemptive scheduler with two tasks: one task for the main application running a co-operative tasker, and the other task for the networking.

Quote
Quote
For time-critical systems one can use time-triggered scheduling, for example. This kind of scheduling will produce even more deterministic systems, if needed.


There is absolutely nothing special about timeouts: they are merely an event equivalent to a message arriving, an input being available or an output completing. All such events should be treated identically at the language level and the runtime level.

By time-triggered scheduling I meant something like this: https://en.wikipedia.org/wiki/Time-triggered_architecture

Especially this one: "Use of TT systems was popularized by the publication of Patterns for Time-Triggered Embedded Systems (PTTES) in 2001[1] and the related introductory book Embedded C in 2002.[4]".

The book is freely available from here: https://www.safetty.net/publications/pttes

Here is a nice summary for Analysis Of Time Triggered Schedulers In Embedded System:
https://www.interscience.in/cgi/viewcontent.cgi?article=1014&context=ijcsi
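For a flavour of what such a time-triggered scheduler looks like, a generic C sketch (not the PTTES implementation, just the usual tick-driven task-table idea; names hypothetical, and atomicity of the tick counter is glossed over):

Code: [Select]
#include <stdint.h>
#include <stddef.h>

struct tt_task {
    void     (*run)(void);
    uint32_t period_ticks;     /* how often the task should run */
    uint32_t delay_ticks;      /* ticks remaining until its next run */
};

static void task_read_sensor(void)    { /* ... */ }   /* hypothetical tasks */
static void task_update_display(void) { /* ... */ }

static struct tt_task tasks[] = {
    { task_read_sensor,    10, 0 },
    { task_update_display, 50, 5 },    /* offset so the tasks don't collide in one tick */
};

/* Incremented from the periodic timer interrupt. */
static volatile uint32_t pending_ticks;
void timer_isr(void) { pending_ticks++; }

/* Main loop: sleep until a tick arrives, then run whatever tasks are due. */
void tt_scheduler_run(void)
{
    for (;;) {
        while (pending_ticks == 0)
            ;                          /* enter low-power sleep here, e.g. __WFI() on Cortex-M */
        pending_ticks--;

        for (size_t i = 0; i < sizeof tasks / sizeof tasks[0]; i++) {
            if (tasks[i].delay_ticks == 0) {
                tasks[i].run();
                tasks[i].delay_ticks = tasks[i].period_ticks;
            }
            tasks[i].delay_ticks--;
        }
    }
}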
 
The following users thanked this post: DiTBho

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20768
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: event-oriented programming language
« Reply #42 on: January 05, 2023, 04:56:01 pm »
Why 'language' as opposed to 'runtime' or 'library' ?
It's easy to do event-driven programming with any language, so I don't see why you'd need a new one. What problem are you trying to solve ?

That is a good question.

There are multiple answers, none of which are completely compelling (Turing machines and all that!). A few that spring to mind, for example...

While you can do object oriented programming in C (and I did in the early-mid 80s), you can do it more easily and clearly in a (decent) object oriented language (C++ is excluded from that!). For example, xC's constructs strongly encourage event oriented architecture and implementation in the real-time embedded arena.

If a good set of abstractions and concepts are chosen and embodied in a language, then they will guide people to using them effectively. OTOH a library is easier to ignore and/or use badly.

A language should enable automated tooling that cannot be achieved with a library. Examples are SPARK (proof of program properties), xC on xCORE enabling calculation of worst case execution times (none of that measure and hope crap!), and Java introspection at runtime and in an IDE (think ctrl-space autocompletion) versus that being unavailable in C++.

P.S. all those advantages presume the language embodies a good set of abstractions, they are well implemented, and the time and effort is available for the tools to be implemented. If any of those don't apply, then it will usually be preferable to implement the abstractions as a library. Basically it is a damn sight more practical to implement a Domain Specific Library than a Domain Specific Language.

P.P.S. implementing a library requires that the underlying language has suitable behavioural guarantees. Thus good luck implementing threads in C (except recent versions) or in Python or Ruby.
« Last Edit: January 05, 2023, 06:09:41 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20768
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: event-oriented programming language
« Reply #43 on: January 05, 2023, 05:02:28 pm »
In imperative languages, write operations are synchronous: the call returns when the data has been sent to the storage device.  Especially on microcontrollers, much of that time is spent busy-waiting for the operation to complete.  We could get more out of the same hardware if that wait time could be used for something more useful.  To do so, a write must be turned into a two-part operation: you start the write, and at some time later the write completes; in the meantime, you must not modify the buffer contents, as it is still in the process of being written out.

If you are using some sort of preemptive RTOS, this is typically the case, i.e. the program flow will contain these (inefficient) busy loops.

It needn't be inefficient. All you need is one processor/core per event loop. Cores are cheap nowadays :)

You are so spoiled! I don't have the luxury of multiple cores :) I work in a single-core embedded environment, where the available amount of Flash and RAM is typically very limited (cheap ARM Cortex-M3 devices), the device's energy consumption has to be minimized, and battery lifetime has to be maximized.

I know which way history is headed. I want to be ahead of the curve :)

More importantly, the current mainstream languages are insufficient; we will need significant improvements. That means we need to start yesterday!

Quote
Quote
Quote
Using a co-operative scheduler and event-driven techniques will typically prevent this. Patterns like Observer and Publish/Subscribe are nice techniques to trigger processing only when something needs to be done. Combining these techniques with state machines and event queues, you can create responsive systems. The devices will also be quite deterministic, in the sense that nothing happens if no events are generated, and the device is active only when processing events.

RTOSs are merely a hack to multiplex several processes (i.e. event loops) onto a single execution engine. They are a useful hack when insufficient execution engines are available.

Luxury items again! :) RTOSes need to allocate RAM for each task. As I have only a very limited amount of RAM available, I prefer to use / have to use a simple co-operative tasker/scheduler, which requires only one global stack. In some special cases I may use a preemptive scheduler with two tasks: one task for the main application running a co-operative tasker, and the other task for the networking.
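
For concreteness, such a single-stack co-operative tasker can be as small as a table of run-to-completion functions; a minimal sketch, where the task names and tick handling are placeholders:

#include <stddef.h>

/* Minimal single-stack co-operative tasker: every task is a short
 * run-to-completion function, so only one (global) stack is needed. */

typedef void (*task_fn)(void);

static void task_poll_buttons(void)   { /* read inputs, post events    */ }
static void task_update_display(void) { /* redraw if something changed */ }
static void task_network(void)        { /* feed the network stack      */ }

static const task_fn tasks[] = {
    task_poll_buttons,
    task_update_display,
    task_network,
};

void scheduler_run(void)
{
    for (;;) {
        size_t i;
        for (i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
            tasks[i]();                 /* each call must return quickly */
        /* typically: sleep here until the next tick or event */
    }
}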

See above.

See xCORE processors for embedded hard real-time systems. Currently up to 32 cores/chip, and chips can be "paralleled".

Quote
Quote
Quote
For time-critical systems one can use time-triggered scheduling, for example. This kind of scheduling will produce even more deterministic systems, if needed.


There is absolutely nothing special about timeouts: they are merely an event equivalent to a message arriving, an input being available or an output completing. All such events should be treated identically at the language level and the runtime level.

By time-triggered scheduling I meant something like this: https://en.wikipedia.org/wiki/Time-triggered_architecture

Especially this one: "Use of TT systems was popularized by the publication of Patterns for Time-Triggered Embedded Systems (PTTES) in 2001[1] and the related introductory book Embedded C in 2002.[4]".

The book is freely available from here: https://www.safetty.net/publications/pttes

Here is a nice summary for Analysis Of Time Triggered Schedulers In Embedded System:
https://www.interscience.in/cgi/viewcontent.cgi?article=1014&context=ijcsi

That doesn't change my contention. All it means is that a process's main loop is sitting idle until the tick/timeout event arrives.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Kalvin

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: event-oriented programming language
« Reply #44 on: January 05, 2023, 05:21:32 pm »
That doesn't change my contention. All it means is that a process's main loop is sitting idle until the tick/timeout event arrives.

Idle, yes, but not necessarily running. The MCU may be held in a low-power sleep state, consuming only 1 µA or less, while waiting for the next timer tick to occur. This can be used to minimize the device's energy consumption.
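
On a Cortex-M that usually boils down to a WFI in the idle loop; a minimal sketch, assuming CMSIS (which provides the __WFI() intrinsic via the vendor's device header) and SysTick as the wake-up source:

#include <stdbool.h>

/* Tick-driven, sleep-when-idle main loop on a Cortex-M.  __WFI() is the
 * CMSIS "wait for interrupt" intrinsic, provided by the vendor's device
 * header (not included here, as its name is device-specific). */

static volatile bool tick_pending = false;

void SysTick_Handler(void)              /* fires once per scheduling tick */
{
    tick_pending = true;
}

void main_loop(void)
{
    for (;;) {
        while (!tick_pending)
            __WFI();                    /* sleep until any interrupt fires */
        tick_pending = false;

        /* run the time-triggered / co-operative tasks for this tick */
    }
}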
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20768
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: event-oriented programming language
« Reply #45 on: January 05, 2023, 06:12:26 pm »
That doesn't change my contention. All it means is that a process's main loop is sitting idle until the tick/timeout event arrives.

Idle, yes, but not necessarily running. The MCU may be held in a low-power sleep state, consuming only 1 µA or less, while waiting for the next timer tick to occur. This can be used to minimize the device's energy consumption.

Just so. The xCORE devices do that on a per-core basis, I believe. The equivalent processes in FPGA LUTs will have reduced consumption due to there being no signals changing, but the clock system will still be running full throttle.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline DiTBhoTopic starter

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: event-oriented programming language
« Reply #46 on: January 05, 2023, 07:06:24 pm »
While you can do object-oriented programming in C (and I did in the early-mid 80s), you can do it more easily and clearly in a (decent) object-oriented language (C++ is excluded from that!).

yup, even polymorphism in C89 is possible, but the language doesn't help, so you have to spend more time writing stuff. I know because I wrote a b+tree library that way.
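
For anyone who hasn't tried it, the usual C89 trick is an explicit struct of function pointers acting as a vtable; a toy sketch, where the shape/ops names are illustrative and not taken from the b+tree library mentioned above:

#include <stdio.h>

/* Hand-rolled polymorphism in C89: an explicit "vtable" of function
 * pointers, with each object carrying a pointer to its ops table. */

struct shape;

struct shape_ops {
    double (*area)(const struct shape *self);
};

struct shape {
    const struct shape_ops *ops;    /* the "vtable" */
    double a, b;                    /* dimensions, meaning depends on type */
};

static double rect_area(const struct shape *self) { return self->a * self->b; }
static double tri_area (const struct shape *self) { return 0.5 * self->a * self->b; }

static const struct shape_ops rect_ops = { rect_area };
static const struct shape_ops tri_ops  = { tri_area  };

int main(void)
{
    struct shape r = { &rect_ops, 3.0, 4.0 };
    struct shape t = { &tri_ops,  3.0, 4.0 };

    /* The call site does not know (or care about) the concrete type: */
    printf("%f %f\n", r.ops->area(&r), t.ops->area(&t));
    return 0;
}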

It's what "X-oriented" means the language helps with the "X set of features/needs".

For example, I can say that my-c is "ICE-testing-oriented" because it adds native support(1) for ICE-testing.

(1) Actually, it adds new constructs which help, but I also decided to restrict the C89 grammar in a specific way so that you don't have to "modify" your code later: if you write something and it compiles, it's already ready for ICE-testing.

This being "specifically written" (C89 grammar restriction) is my personal second meaning of "ICE-oriented".
Perhaps wrong, but it works insanely well with my colleagues  :D
« Last Edit: January 06, 2023, 12:35:23 am by DiTBho »
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline DiTBhoTopic starter

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: event-oriented programming language
« Reply #47 on: January 05, 2023, 11:51:27 pm »
My Atlas MIPS board has a special circuit that disables/enables the clock and issues a "wake up the machine" interrupt (different from reset).

Both the UART card and the network card (DEC Tulip based) can work autonomously, without the CPU, as regards packet acceptance/rejection, and fire an interrupt to the CPU when special packets require its attention.

It looks like a small but interesting working scheme for a simple event-driven skeleton  :D
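
A minimal sketch of such a skeleton, with the interrupt handlers posting events into a small ring buffer that the main loop drains; the event names and queue size are made up, and it assumes the posting ISRs do not preempt each other:

#include <stdint.h>

/* Interrupt handlers post events into a small ring buffer; the main loop
 * drains it, and otherwise lets the clock-stop circuit idle the CPU. */

enum event { EV_NONE, EV_WAKEUP, EV_UART_RX, EV_NET_SPECIAL_PACKET };

#define QLEN 16u
static volatile uint8_t q[QLEN];
static volatile unsigned q_head, q_tail;

void post_event(enum event ev)          /* called from the ISRs */
{
    unsigned next = (q_head + 1u) % QLEN;
    if (next != q_tail) {               /* silently drop if full (a sketch!) */
        q[q_head] = (uint8_t)ev;
        q_head = next;
    }
}

enum event get_event(void)              /* called from the main loop */
{
    enum event ev = EV_NONE;
    if (q_tail != q_head) {
        ev = (enum event)q[q_tail];
        q_tail = (q_tail + 1u) % QLEN;
    }
    return ev;
}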
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6967
  • Country: fi
    • My home page and email address
Re: event-oriented programming language
« Reply #48 on: January 06, 2023, 01:32:36 pm »
Classic imperative patterns are not good; this is probably why no hobbyist wants anything to do with those chips (which are also hard to find, and expensive).
I do prefer free toolchains and relatively affordable development boards –– seeing as I can nowadays get very powerful Linux SBCs for ~ 40-60 € (Amlogic, Samsung, Rockchip SoC chips and chipsets having very good vanilla Linux kernel support, so one is not dependent on vendor forks) –– as a hobbyist myself.

I admit I've been extremely interested in XMOS xCore ever since I first heard of it from tggzzz, but the single-vendor approach with non-open toolchain feels, well, "too risky" or something.  (I've been burned enough times by vendors already, you see, so maybe I'm paranoid.)  And vendor toolchain support for Linux or BSDs tends to be second-tier, which further reduces the value of investment, even if for purely learning and experimentation.

Which nicely leads me to:
Why 'language' as opposed to 'runtime' or 'library'?
To do better.

Like I mentioned earlier in a reply to Kalvin, none of what I have suggested here leads to "new" machine code; everything stems from already existing patterns.  The problem I'd like to solve is to express those patterns in a clearer, more efficient manner, and at the same time avoid the known pitfalls of existing low-level languages –– memory safety, and ease of static analysis.

No new abstractions, just easier ways for us humans to describe the patterns in a way compilers can statically check and generate efficient machine code for.

To me, the number of times I've had to argue for MPI_Isend()/MPI_Irecv() –– event-based I/O in MPI; the call initiates the transfer and provides an MPI_Request handle one can examine or wait on for completion or error –– against highly-paid "MPI Experts", indicates that whenever an imperative approach is possible, it will be used over an event-oriented one, because humans.
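
For readers who haven't met them, the non-blocking pattern being argued for looks roughly like this sketch; the buffer sizes, tags and the overlapped work are placeholders, and it assumes an even number of ranks:

#include <mpi.h>
#include <stdio.h>

/* Non-blocking ("event-style") MPI: start the transfers, do useful work
 * while they are in flight, then wait for the completion events. */
int main(int argc, char **argv)
{
    double out[1024], in[1024];
    MPI_Request reqs[2];
    int rank, peer, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = rank ^ 1;                      /* pair ranks 0<->1, 2<->3, ... */

    for (i = 0; i < 1024; i++)
        out[i] = rank + i;                /* something to send */

    MPI_Irecv(in,  1024, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(out, 1024, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... overlap: compute on data that touches neither 'in' nor 'out' ... */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);   /* the completion "events" */

    printf("rank %d received in[0] = %g\n", rank, in[0]);
    MPI_Finalize();
    return 0;
}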

I myself do not have "libraries" for my event-oriented stuff, I just have patterns (with related unit test cases and examples) I adapt for each use case separately.  I seriously dislike the idea of having one large library that provides such things, because it leads to framework ideation where you do things a certain way because it is already provided for you, instead of doing things the most efficient or sensible way.  Many small libraries, on the other hand, easily lead to (inter-)dependency hell.

Do note I've consistently raised the idea of experimenting with how to express the various patterns, using an imagined language (but at the same time thinking hard about what kind of machine code the source should compile to).  So, there is no specific single pattern I'm trying to recommend anyone to use; I'm pushing/recommending/discussing/musing about how to experimentally discover better-than-what-we-have-now ways of describing the patterns we already use, and build a new language based on that.

If you've ever taken a single course on programming language development, or any computer science courses related to programming languages really, this will absolutely look like climbing a tree ass first.  Yet, I have practical reasons to believe it will work, and can/may/should lead to a programming language that is better suited to our current event-oriented needs on resource-constrained systems than what we have now.

(As to those practical reasons: I've mentioned that I've occasionally dabbled in optimizing work flows by spending significant time and effort beforehand to observe and analyse how humans perform the related tasks.  This itself is a very well documented (including scientific papers, as well as practical approaches done in high-production development/factory environments) problem solving approach.  My practical reasons are thus based on treating programming as a problem solving workflow.  This has worked extremely well for myself (in software development in about a dozen programming languages), so I have practical reasons to expect that this approach, even if considered "weird" or "inappropriate" by true CS folks, will yield very useful results.)
 

Offline DiTBhoTopic starter

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: event-oriented programming language
« Reply #49 on: January 06, 2023, 02:35:55 pm »
I do prefer free toolchains and relatively affordable development boards –– seeing as I can nowadays get very powerful Linux SBCs for ~ 40-60 € (Amlogic, Samsung, Rockchip SoC chips and chipsets having very good vanilla Linux kernel support, so one is not dependent on vendor forks) –– as a hobbyist myself.

Open source is limited and, for example, you cannot have the same Ada experience with GNAT that you can have with Green Hills AdaMULTI.

Here you need a serious job: AdaMULTI is a complete integrated development environment for embedded applications using Ada and C, with serious ICE support. Just the ICE's header costs 3000 euro; the full package costs 50000 euro.

What do you have with open source? A poor man's debugger? A gdb-stub? Umm?
Our my-c-ICE technology is not as great as Green Hills's, but it's several light years ahead of gdb!

And note: once again, it's not open source!
 
So, my opinion here is clear: you need to find a job in avionics to enjoy the full C+Ada experience.

The same applies to Linux SBCs ... they are all the same, over and over, the same story. Linux bugs, userland bugs ... nothing different from the same boring daily experience, just new toys.

The M683xx and MPC840 are great pieces of hardware, like nothing seen before, and - once again - open source has ZERO support for their hardware design, whereas the industry has some great stuff.

MPC840 is used in AFDX switches, used everywhere from avionics to naval systems to high-speed railways.
M683xx is used by Ford Racing for internal combustion engines.

Now, I'd love to find a job which exposes me to the Dyson Digital Motor technology.
I know their electric car was an epic business failure, but their technology, even on the software side, is great!
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

