Author Topic: Microchip announces PIC64 ... and it's RISC-V.  (Read 9060 times)


Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3248
  • Country: ca
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #50 on: July 19, 2024, 03:51:52 pm »
Quote
Yes they could, but that's a very stupid way to program from both a size and speed perspective. If you have double-linked lists at all then they should have 20 or 30 payload items in each one. Sliding things around inside an array is much cheaper than chasing pointers -- at least assuming you have a cache or VM. But it also gets the pointer overhead down, which is important no matter whether pointers are 4 or 8 bytes.

Obviously, you can rewrite anything without using pointers, unless of course you decide that any array is also a pointer. This will minimize pointer overhead.

The world thought that 32-bit pointers were too small, so it moved towards 64-bit pointers, and people dreamed and guessed how soon it would be time to go to 128-bit pointers. Then, just a few years later, they realized that 64-bit pointers are too big and decided to chop off a few bits and use them as secure pointer signatures to make hackers' lives harder. Does RISC-V do this too?
 

Online brucehoultTopic starter

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: nz
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #51 on: July 19, 2024, 04:32:31 pm »
You mistake me, sir.

I did not suggest writing things without using pointers; I suggested making each thing that contains pointers also contain (much) more data than pointers.

When I was but an 18 year old pup in my first year at university they put us on a PDP-11/34 with only a primitive line editor to write our code. Both I and one of my classmates found this intolerable, and we independently wrote (in Pascal) simple full-screen editors. We both arranged the text file as a linked list of lines, and each line as a linked list of characters, but he put 1 character plus a pointer (2 bytes on the PDP-11) in each list node, while I put an array of 6 characters plus 1 pointer in each node, making the whole node take 8 bytes.

His program ran out of memory on relatively small text files, mine didn't.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3248
  • Country: ca
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #52 on: July 19, 2024, 05:51:26 pm »
Quote
When I was but an 18 year old pup in my first year at university they put us on a PDP-11/34 with only a primitive line editor to write our code. Both I and one of my classmates found this intolerable, and we independently wrote (in Pascal) simple full-screen editors. We both arranged the text file as a linked list of lines, and each line as a linked list of characters, but he put 1 character plus a pointer (2 bytes on the PDP-11) in each list node, while I put an array of 6 characters plus 1 pointer in each node, making the whole node take 8 bytes.

I also wrote a text editor in Pascal (mixed with Assembler), although I guess that was much later, for Windows 3.1. It had a double-linked list of lines, but the lines themselves were structures containing flags and a variable-length array for characters. This worked very fast and could edit huge files.
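
For concreteness, a line node of that general shape in modern C might look something like this (a hedged sketch, not the original Pascal/assembler code; all field names are made up):

Code: [Select]
#include <stdlib.h>
#include <string.h>

/* One editor line: list links and flags up front, the text itself stored
   inline at the end of the same allocation (C99 flexible array member).
   Each line costs a single allocation, and the characters sit right next
   to the metadata in memory. */
struct line {
    struct line *next;
    struct line *prev;
    unsigned int flags;   /* e.g. modified, selected, ... */
    size_t       length;  /* number of characters in text[] */
    char         text[];  /* 'length' characters, not NUL-terminated */
};

static struct line *line_new(const char *s, size_t len)
{
    struct line *ln = malloc(sizeof *ln + len);
    if (!ln)
        return NULL;
    ln->next = ln->prev = NULL;
    ln->flags = 0;
    ln->length = len;
    memcpy(ln->text, s, len);
    return ln;
}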
 

Online glenenglish

  • Frequent Contributor
  • **
  • Posts: 458
  • Country: au
  • RF engineer. AI6UM / VK1XX . Aviation pilot. MTBr
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #53 on: July 19, 2024, 08:29:35 pm »
While on the subject of pointers -- do we all remember PowerPC? Wasn't it the one that was extra C++ friendly, with two offset registers (like a global and a local one) being used together to fetch a piece of data, and the local offset could be either 16 or 32 bits? I really liked PowerPC assembler, although I didn't do much of it. Why didn't this catch on? (No absolute addressing in ld/st, IIRC.)

Northguy,
Have you got any URLs / papers for me to read discussing secure pointers? Interesting topic.
« Last Edit: July 19, 2024, 08:46:52 pm by glenenglish »
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3248
  • Country: ca
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #54 on: July 19, 2024, 09:00:26 pm »
Quote
Have you got any URLs / papers for me to read discussing secure pointers? Interesting topic.

It is the PAuth extension in ARMv8.3 (or ARMv8.5?) which allows for that. You can read about it in the ARM docs, probably here:

https://developer.arm.com/documentation/ddi0487/ka/?lang=en

Apple has gone the furthest in implementing it and created arm64e (not mandatory yet, but you know Apple):

https://developer.apple.com/documentation/security/preparing_your_app_to_work_with_pointer_authentication

Microsoft does use the pacibsp instruction in their system DLLs on Windows on ARM, but it is harmless so far:

https://devblogs.microsoft.com/oldnewthing/20220819-00/?p=107020
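
To see what this looks like in practice, you can compile a trivial function with return-address signing enabled. A minimal sketch, assuming an AArch64 GCC or Clang that supports -mbranch-protection (the helper function is just a placeholder to force the link register to be saved):

Code: [Select]
/* Build with, for example:
 *   aarch64-linux-gnu-gcc -O2 -mbranch-protection=pac-ret -S pac_demo.c
 *
 * With pac-ret enabled, functions that save the link register get a
 * prologue/epilogue roughly like:
 *   paciasp   ; sign LR, using SP as the modifier
 *   ...
 *   autiasp   ; authenticate LR before returning
 * (Windows uses the B-key variants pacibsp/autibsp instead.)
 */
extern int helper(int);

int pac_demo(int x)
{
    /* The call forces LR to be spilled, so the compiler protects it. */
    return helper(x) + 1;
}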
 
The following users thanked this post: glenenglish

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3922
  • Country: us
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #55 on: July 20, 2024, 01:39:01 am »
Quote
I am surprised manufacturers are keeping variable-length instructions these days; it must play merry hell with the prefetcher/pipeline.

It doesn't really. RISC-V instructions are always a multiple of 16 bits, 16-bit aligned, and you can always tell an instruction's length from its first 16-bit chunk. Currently all instructions are either 16 or 32 bits. Decoding x86 sucks, but other variable-length instruction sets are not bad.
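
A hedged sketch in C of that length rule, per the length-encoding convention reserved in the spec (only the 16- and 32-bit forms are ratified today):

Code: [Select]
#include <stdint.h>

/* Return the length in bytes of a RISC-V instruction, given only its
   first (lowest-addressed) 16-bit parcel. */
static int rv_insn_length(uint16_t first_parcel)
{
    if ((first_parcel & 0x3) != 0x3)
        return 2;            /* compressed (C extension) instruction */
    if ((first_parcel & 0x1c) != 0x1c)
        return 4;            /* standard 32-bit instruction */
    if ((first_parcel & 0x3f) == 0x1f)
        return 6;            /* reserved 48-bit encoding */
    if ((first_parcel & 0x7f) == 0x3f)
        return 8;            /* reserved 64-bit encoding */
    return 0;                /* longer/reserved encodings: not handled here */
}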

On the other hand, the improvement in code density can be quite significant. That improves power efficiency and makes the best use of cache / SRAM and memory bandwidth. The logic needed to add compressed-instruction support is tiny compared to the savings you get elsewhere.

Quote
Still unsure what the real advantage is of having a 64-bit processor for most embedded, small application projects...

For many, none at all, but I have definitely worked on very embedded applications where 64-bit arithmetic would be useful and performance was critical.
 

Online brucehoultTopic starter

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: nz
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #56 on: July 20, 2024, 05:27:00 am »
Quote
I am surprised manufacturers are keeping variable-length instructions these days; it must play merry hell with the prefetcher/pipeline.

It doesn't really. RISC-V instructions are always a multiple of 16 bits, 16-bit aligned, and you can always tell an instruction's length from its first 16-bit chunk. Currently all instructions are either 16 or 32 bits. Decoding x86 sucks, but other variable-length instruction sets are not bad.

I wrote something about how to decode RISC-V's mixed instruction widths on very wide CPUs, e.g. decoding 16, 32, 64, 128 bytes etc. in parallel. It's slightly harder than decoding arm64's fixed-width instructions, but not enough to matter:

https://news.ycombinator.com/item?id=40993502

Quote
On the other hand, the improvement in code density can be quite significant. That improves power efficiency and makes the best use of cache / SRAM and memory bandwidth. The logic needed to add compressed-instruction support is tiny compared to the savings you get elsewhere.

It's hard to be exact about the size / cost ratio for random logic to implement the 16-bit to 32-bit instruction predecoder vs extra cache and DRAM or flash to store bigger code, and this will vary between microcontrollers (code is in flash), applications processors (code is loaded into DRAM, then cached), and minimal cores doing specialised tasks in an ASIC or FPGA.

But as far as I can tell, by the time you have about 3-4 KB of program code in cache, SRAM, or BRAM, the savings from using the C extension (usually 25% to 30%) are enough to pay for the area of the C decoder. The crossover number would be bigger if you run from flash or DRAM without caching, but only 2 or 3 times bigger.  I guess FPGA logic vs LUTRAM is the clearest comparison: if you save 1 KB of program code then that's 128 LUT6s of LUTRAM, which is enough to implement an RVC to RV32I decoder.
 

Online glenenglish

  • Frequent Contributor
  • **
  • Posts: 458
  • Country: au
  • RF engineer. AI6UM / VK1XX . Aviation pilot. MTBr
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #57 on: July 20, 2024, 08:12:09 pm »
So, feeling like I wasn't informed enough for this conversation (my knowledge is of Thumb and Thumb-2), I read the short 21 pages of

https://riscv.org/wp-content/uploads/2015/11/riscv-compressed-spec-v1.9.pdf

OK - so the compressed instructions in this case are purely compressed forms of their full-size instructions, i.e. they simply get expanded, and, as Bruce points out, are easily identified.

A 31% static code size reduction for the Linux kernel (RV64) is a substantial improvement - this is not small beans.

« Last Edit: July 20, 2024, 08:13:42 pm by glenenglish »
 

Online glenenglish

  • Frequent Contributor
  • **
  • Posts: 458
  • Country: au
  • RF engineer. AI6UM / VK1XX . Aviation pilot. MTBr
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #58 on: July 21, 2024, 08:52:26 pm »
What is the business case for a niche space-qualified device for Microchip?

Small quantities, small market (unless perhaps Starlink is going to use it)?

The device certainly has broad attractiveness outside rad environments. Does a company launch this in the media with fanfare about the prowess of a space chip and expect commercial industry to adopt it based on the device's cool credentials, or rather on the company's? The dual lockstep multicore is special. According to early media info, there will be equivalent silicon for engineering work, but that's not the same as saying there will be a commercial version.

For those interested, this document covers quite a bit of ground:
https://arxiv.org/pdf/2303.08706
 

Online brucehoultTopic starter

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: nz
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #59 on: July 21, 2024, 11:53:28 pm »
Quote
What is the business case for a niche space-qualified device for Microchip?

They (Microsemi) have been a major player in rad-hard and rad-tolerant FPGAs for a long time.

Quote
Small quantities, small market (unless perhaps Starlink is going to use it)?

... sky-high prices, big margins, locked-in customers.

They've been selected by NASA to make a replacement for the RAD750, a (max) 200 MHz PowerPC, which sold for $200,000 per chip in the early 2000s and probably more now.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6947
  • Country: fi
    • My home page and email address
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #60 on: July 22, 2024, 02:49:20 am »
Quote
I did not suggest writing things without using pointers; I suggested making each thing that contains pointers also contain (much) more data than pointers.

Yep.  The text editor example is still valid, although rather than running out of memory, having more data per node makes traversal faster, and thus the text editor more responsive.  (In GUI editors, the problem is even more complex, since it boils down to not only finding the bounding rectangle for each displayed glyph, but also being able to discover the interglyph position given any relative 2D coordinate.  Fortunately, windowing toolkits and widget toolkits typically provide that.)

Choosing an efficient data structure and algorithms typically makes an order of magnitude bigger difference than optimizing the same code at the instruction level.  For machine-formatted text like HTML, CSS, obfuscated JavaScript et cetera, the text content may not contain any newlines or paragraph breaks at all, so just storing each line separately is inefficient, too: to insert or delete a single character, the entire file might have to be moved in memory because the entire file is essentially a single line, and that wreaks havoc with CPU and OS-level caching if the file is large, making everything on that core slower than necessary.  Splitting at word boundaries may generate too many words, making traversal slow, too.  There is no good solution that would work well in all situations, so instead you'll want something that can split large nodes and combine small nodes to more optimal sizes as their access patterns change. 

(Interestingly, doing something similar for binary search trees –– balancing the tree instead of splitting and combining nodes to keep the tree structure close to optimal –– led to red-black tree data structures in the 1970s.)

The exception is data structures describing graphs and networks, where the pointer itself is the data.

As an example, consider the C code I posted about a month ago implementing a dependency manager by describing the dependencies as edges in a directed graph.  It has two abstract data structure types: one for fast lookup by name (hash table entries forming a linked list, payload being a pointer to the dependency graph node, the DJB XOR2 hash of the name, and the name as a string), and the other to describe each node, consisting of three or more pointers (next in two unrelated linked lists, a reference to the name, and any number of directed edges to other nodes), a mutable unsigned integer (tag) used during tree traversal, and the number of unfulfilled dependencies (edges) from this node.  As the pointers are the most interesting data in the nodes, the nodes consist mostly of pointers.
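
For illustration only (this is not the code from that thread; the field names here are made up), that kind of node shape in C looks roughly like:

Code: [Select]
#include <stddef.h>
#include <stdint.h>

struct node;

/* Hash table entry: payload is a pointer to the graph node, plus the
   hash and the name it was derived from. */
struct name_entry {
    struct name_entry *next;   /* next entry in the same hash chain */
    struct node       *node;   /* graph node this name refers to */
    uint32_t           hash;   /* DJB xor hash of the name */
    const char        *name;
};

/* Graph node: mostly pointers, because here the pointers (the directed
   edges) are themselves the interesting data. */
struct node {
    struct node  *work_next;   /* linkage in a work/ready list */
    struct node  *all_next;    /* linkage in the list of all nodes */
    const char   *name;        /* reference to the interned name */
    unsigned int  tag;         /* scratch marker used during traversal */
    size_t        unmet;       /* number of unfulfilled dependencies */
    size_t        nedges;      /* number of outgoing edges in edge[] */
    struct node  *edge[];      /* directed edges to other nodes */
};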

Some of the discussion in that thread following the example explains how sets of such data structures can be efficiently stored in pre-allocated arrays.  I first encountered those in scientific Fortran code, so the technique is decades old already, and well tested in practice in terms of reliability and efficiency.

If we listed all edges in a single one-dimensional array (of unsigned integer values), with the edges sharing the same source consecutive in the array, with each such consecutive set preceded by the number of edges in the set (and optionally the source edge), we'd have a Verlet list, first described by Loup Verlet in 1967.  It is not as amenable to editing (deletion or masking out being important for the dependency graph processing in the example above), but if one uses signed integers with zero reserved for padding and negative values referring to "deleted" targets, and only positive non-zero values to identify each node, it works well even for the abovementioned dependency graph.
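
A minimal sketch of that flat layout and its traversal, assuming (as above) positive node identifiers, zero for padding, and negative values for deleted targets:

Code: [Select]
#include <stdio.h>

/* All edges in one flat array: each group is [count, target, target, ...],
   with edges sharing the same source stored consecutively.  Node ids are
   positive; 0 is padding and negative values mark "deleted" targets. */
static const int edges[] = {
    /* source 1: */ 3,  2,  3, -4,   /* the edge to node 4 was deleted */
    /* source 2: */ 1,  3,
    /* source 3: */ 0,
};

int main(void)
{
    size_t i = 0, source = 1;
    const size_t total = sizeof edges / sizeof edges[0];

    while (i < total) {
        int count = edges[i++];
        printf("node %zu:", source++);
        for (int k = 0; k < count; k++) {
            int target = edges[i++];
            if (target > 0)              /* skip padding and deletions */
                printf(" -> %d", target);
        }
        printf("\n");
    }
    return 0;
}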

I personally don't see any big difference between the Verlet list/array approach and one using C99 structures as nodes with a flexible array member containing pointers to the targets.  To me, that difference is just a practical implementation detail, and translating code using one to code using the other is more or less just a mechanical translation and not a creative one.  The important thing to me is the cost of access.  Whether the code uses pointers or element indexes or something else is irrelevant.  What is relevant is the amount of practical work needed (at the machine code level) to arrive at the desired target, in optimal, typical, and pessimal (worst-case) cases.  (It is also why I prefer to examine median access time instead of average access time.  Average is skewed by outliers and thus not at all useful to me, whereas median tells the maximum time taken in at least 50% of the cases.  Depending on the amount of outliers due to noise and testing system behaviour in general, 85%/95%/99% limits can be even more useful and informative.)
 

Online brucehoultTopic starter

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: nz
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #61 on: July 22, 2024, 04:23:13 am »
Quote
(Interestingly, doing something similar for binary search trees –– balancing the tree instead of splitting and combining nodes to keep the tree structure close to optimal –– led to red-black tree data structures in the 1970s.)

Better: B-trees. They were developed for on-disk use, but these days they make sense in RAM too.

Making each node use an entire cache block, or a couple of adjacent cache blocks is much better than using tree nodes that are a fraction of a cache block in size. You've read the whole cache block -- better to use all the data you just read, not a small part and then jump randomly somewhere else.

Or even make each node the size of an MMU page. If you're getting into VM paging then that's going to be optimal, but even without that, TLB misses and refills can be as expensive as cache misses, or even more so.
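
As a rough illustration (a sketch, not tuned code), a B-tree node packed into a single 64-byte cache line, using 32-bit keys and 32-bit child indices into a node pool instead of 64-bit pointers, could look like:

Code: [Select]
#include <assert.h>
#include <stdint.h>

/* 4 + 7*4 + 8*4 = 64 bytes: every cache line fetched is fully used. */
struct btree_node {
    uint32_t nkeys;      /* number of keys in use (0..7) */
    uint32_t key[7];     /* keys, kept sorted */
    uint32_t child[8];   /* pool indices of children; 0 = none (leaf) */
};

static_assert(sizeof(struct btree_node) == 64, "node should fill one cache line");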

Quote
The exception is data structures describing graphs and networks, where the pointer itself is the data.

[...]

Some of the discussion in that thread following the example explains how sets of such data structures can be efficiently stored in pre-allocated arrays.  I first encountered those in scientific Fortran code, so the technique is decades old already, and well tested in practice in terms of reliability and efficiency.

Doesn't even have to be arrays.

If your complete graph is going to take up less than 4 GB of memory then you can just use a 4 GB (or smaller) heap with a register dedicated to pointing to its base and the internal pointers being 32-bit offsets from that register. Or if -- as is standard practice these days -- the minimum malloc() is 16 bytes, then this technique will work up to a 64 GB heap size.

Well, maybe just scale the 32 bit offset by 8 for a maximum 32 GB heap. All of amd64, arm64, and riscv64 can extend an unsigned 32 bit index to 64 bits, multiply it by 8, add it to a base address, add a further offset (to access a struct field), and load a byte/short/int/long from the final address ... in two instructions:

Code: [Select]
short foo(short *base,  unsigned int index){return *(short*)((long)base+8*(long)index+6);}


foo:
        mov     esi, esi
        movzx   eax, WORD PTR [rdi+6+rsi*8]
        ret

foo:
        add     x8, x0, w1, uxtw #3
        ldrh    w0, [x8, #6]
        ret

foo:
        sh3add.uw       a1,a1,a0
        lh      a0,6(a1)
        ret

With a x16 scale factor x86 and RISC-V expand to three instructions. Arm is still just two.

Conversely, they all still need two instructions just for pointer + 32 bit index even if you don't scale the index, so you might as well scale it...
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2827
  • Country: ca
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #62 on: July 22, 2024, 08:34:55 pm »
The devkit is $150, shipping Oct 25th (if ordered today, as per https://www.microchip.com/en-us/development-tool/curiosity-pic64gx1000-kit-es ). It's cool to FINALLY see a SoC with something more than PCIe 2.0 x1. Also I like that they support LPDDR4, which is significantly easier to route than DDR4.
I wish they'd make higher-clocked SKUs of these things, as 625 MHz doesn't sound too exciting. 1 GHz and above would be way more interesting.

Online glenenglish

  • Frequent Contributor
  • **
  • Posts: 458
  • Country: au
  • RF engineer. AI6UM / VK1XX . Aviation pilot. MTBr
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #63 on: July 22, 2024, 09:10:56 pm »
In terms of editors and trees, what do vi and nano use internally for allocation?
While it's all good to talk about gigabytes, there's still us embedded folk running Linux in 16 MB or 128 MB. Of course that's plenty for text editors even with one element per linked-list node, but if the editor is thrashing the cache (not using full cache lines), then on an embedded system that is doing something moderately real-time, the effect of the editor on the real-time-ish application due to cache eviction (one element used per node, sparsely allocated, not contiguous) is a significant concern. I.e. it doesn't really matter that the RT executable is running at FIFO priority if, at every chance, the lowly editor kicks all the juice out of the small cache.
« Last Edit: July 22, 2024, 09:27:13 pm by glenenglish »
 

Online glenenglish

  • Frequent Contributor
  • **
  • Posts: 458
  • Country: au
  • RF engineer. AI6UM / VK1XX . Aviation pilot. MTBr
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #64 on: July 22, 2024, 09:17:21 pm »
And this new PIC64 looks identical to the existing PolarFire SoC FPGA with the FPGA chopped off the side, so the tools probably at least function...
 

Online brucehoultTopic starter

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: nz
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #65 on: July 23, 2024, 12:53:52 am »
Quote
While it's all good to talk about gigabytes, there's still us embedded folk running Linux in 16 MB or 128 MB.

Heh. As the person who raised editor data structures ... the machine we were using was a 16 bit PDP-11/34 which naturally meant a maximum of 64k for each program, but in fact I think in practice the heap couldn't be over 32k. Furthermore, the entire machine had only 256k of RAM shared between 22 users on 22 Visual 100 terminals (plus two LA120 printers), a 5 MB disk pack to store the OS and another 5 MB disk pack to store students' home directories. From memory, you couldn't log off if you had more than 32k of disk files -- you had to delete some first.

And we thought ourselves lucky. 1st year students at all other universities in the country were using punched cards.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6947
  • Country: fi
    • My home page and email address
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #66 on: July 23, 2024, 03:14:01 am »
Quote
what does nano use internally for allocation?

A doubly-linked list of lines, the content on each line being consecutive in memory.  See src/definitions.h:typedef struct linestruct near line 475 or so.  However, nano is relatively recent (replacement for the pico editor as used by e.g. pine mail client), first released in 1999, at which point typical workstations were already 32-bit with megabytes of memory available.

Only a decade later, 64-bit workstations were already common.  (I mention this because in 2011, I showed how you can memory-map a terabyte data structure (1,099,511,627,776 bytes) into virtual memory and manipulate it as if it were all stored in RAM, on Linux on any 64-bit architecture.  It didn't even need a gigabyte (one thousandth of that) of RAM to work well.)
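
The trick is just a sparse file plus demand paging; a minimal sketch, assuming a 64-bit Linux host and a filesystem that supports sparse files (error handling trimmed, file name made up):

Code: [Select]
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t size = (size_t)1 << 40;          /* 1 TiB of address space */
    int fd = open("huge.dat", O_RDWR | O_CREAT, 0644);
    if (fd == -1 || ftruncate(fd, (off_t)size) == -1) {
        perror("huge.dat");
        return EXIT_FAILURE;
    }

    /* The file is sparse: only pages actually written get backing storage,
       and the kernel pages them in and out on demand. */
    char *data = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (data == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    data[0] = 1;              /* touch the first page ...  */
    data[size - 1] = 1;       /* ... and the very last one */

    munmap(data, size);
    close(fd);
    return EXIT_SUCCESS;
}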

The way e.g. Apache handles dynamic content (coming from CGI programs, a reverse proxy, or internally via filters) is more interesting.  You'll find it in library form in Apache Portable Runtime library, apr-util: Bucket Brigades.  It also uses memory pools, apr-util: Memory Pools, for dynamic allocation, so that instead of tracking and releasing/freeing each dynamic allocation separately, each pool can be released/freed at once.  (Such pool approach is very efficient for service daemons, as each connection can use their own pool, and that pool released when the connection is closed.  We could discuss whether e.g. Boehm GC would be even more effective, but the pool approach makes per-connection resource management easier.)
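
A hedged sketch of the pool idea using the APR pool API from apr_pools.h (link against libapr; exact build flags depend on the installation):

Code: [Select]
#include <apr_general.h>
#include <apr_pools.h>
#include <string.h>

/* One pool per connection: allocate freely while serving it, then throw
   the whole pool away in one call instead of freeing each allocation. */
static void handle_connection(apr_pool_t *parent)
{
    apr_pool_t *conn_pool;
    apr_pool_create(&conn_pool, parent);

    char *buffer = apr_palloc(conn_pool, 4096);   /* no matching free needed */
    memset(buffer, 0, 4096);
    /* ... build and send the response using conn_pool allocations ... */

    apr_pool_destroy(conn_pool);   /* releases every allocation at once */
}

int main(void)
{
    apr_pool_t *root;

    apr_initialize();
    apr_pool_create(&root, NULL);

    handle_connection(root);

    apr_pool_destroy(root);
    apr_terminate();
    return 0;
}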

Quote
While it's all good to talk about gigabytes, there's still us embedded folk running Linux in 16 MB or 128 MB.

Yep, me included.  I have several routers and even a Mikrotik RBM33G (MT7621A SoC, MMIPS architecture, running OpenWRT, with plenty (256 MB) of RAM).  I remember well the days when I used a Power Macintosh 7200 with 32 MB of RAM to scan and edit very large full-color images and text files in MacOS 7.5.3, very effectively, and had fun too.  I know from experience things are possible; it is just a matter of effort, organization, choosing correct algorithms and approaches, and being willing and able to do the work needed.

I also want my userspace applications to tell me when they detect my precious data might have been lost or garbled.  Yet, whenever I implement something like
    if (close(fd) == -1) {
        fprintf(stderr, "%s: Error closing file: %s.\n", filename, strerror(errno));
        exit(EXIT_FAILURE);
    }
I still have to defend it from clueless developers and users examining the code, who don't understand that while this will never trigger for desktop users using one of the most used filesystems, it can still hit users using FUSE or filesystems using delayed flushing like NFS, and is thus worth the "overhead".  And if I precede that with
    if (fdatasync(fd) == -1) {
        fprintf(stderr, "%s: Error syncing data: %s.\n", filename, strerror(errno));
        exit(EXIT_FAILURE);
    }
for the most important data files, I always have to defend having it there, even if everybody involved had already agreed the data was important and we should make sure the OS tells us it has been correctly stored in the filesystem before we exit.  It's pretty depressing, really.

I think it is a cultural issue.  Robust, secure, and efficient code is just not appreciated or considered important anymore; getting new product out quick and often with glitzy features and fanservice is much more important.  Bugs can be fixed and security added on top later on (if we happen to win an intergalactic jackpot at some point).
 

Online glenenglish

  • Frequent Contributor
  • **
  • Posts: 458
  • Country: au
  • RF engineer. AI6UM / VK1XX . Aviation pilot. MTBr
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #67 on: July 23, 2024, 03:26:32 am »
interesting

On memory pools: I only started splitting up my memory pools (per your example of Apache maintaining multiple owned pools) when I started including webservers in my devices (FreeRTOS) and had to make sure that tight memory availability, dynamic HTTP situations, hung-up HTTP sessions etc. could only hurt those that caused the problem, as the meat of the application allocated all the buffers and variables it needed at the start of runtime and they'd never be revisited, apart from the occasional very large sprintf debug/journaling job (where again I could have used a static allocation for sprintf instead of a malloc - but there's a point in life where you say f**k it, I want to be concentrating on the algorithms and not have to concentrate on having this device survive on an oily rag of memory).

As for "Robust, secure, and efficient code is just not appreciated or considered important anymore":
do you think this is the domain of old folk (like me) who used to work inside 4k of 2114 RAM on 6800s? Well, probably not precisely, since there was really no concept of "secure" back then. I think robust and efficient is a cultural thing that needs to be mentored into people. As for efficient - back in 1978, efficient was all that would fit...
Or is this about the few people who work on stuff where it is very bad if it fails to work as required?     glen.
« Last Edit: July 23, 2024, 04:02:44 am by glenenglish »
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6947
  • Country: fi
    • My home page and email address
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #68 on: July 23, 2024, 05:17:21 am »
Quote
[...] I want to be concentrating on the algorithms and not have to concentrate on having this device survive on an oily rag of memory.

Even when you have lots of memory, you can still have even more active connections, and start to suffer from all sorts of buffer issues, especially when the content is generated dynamically or reverse proxied from another device... I too have gone down this rabbit hole, so deep that I eventually emerged with a simple two-part proposal for replacing the underlying protocol and the HTML syntax with better ones, a combined one that needs very little buffering and allows multiplexing multiple data streams (like metadata and content, or even multiple requests in parallel; especially useful for TLS-secured connections).  One day, I might be ready to try and publish them.

Quote
As for "Robust, secure, and efficient code is just not appreciated or considered important anymore":
do you think this is the domain of old folk (like me) who used to work inside 4k of 2114 RAM on 6800s? Well, probably not precisely, since there was really no concept of "secure" back then. I think robust and efficient is a cultural thing that needs to be mentored into people. As for efficient - back in 1978, efficient was all that would fit...
Or is this about the few people who work on stuff where it is very bad if it fails to work as required?     glen.
The recently published Linux Foundation survey on software security education is a very good read to see how Linux developers self-report about security.

As to robust and efficient: about three decades ago, in the mid-90s, Daniel J. Bernstein published qmail, a rock-solid mail transfer agent, at a time when Sendmail was king and about as secure as an empty car left running with the driver's door ajar and a wallet on the passenger seat in a bad neighborhood.  Every time I interacted with other sysadmins, I had to defend my use of qmail instead of sendmail, for no other reason than "everybody else is using sendmail so it must be superior".  Similarly for djbdns/bind – although they are not exactly equivalent in terms of features; djbdns is better suited for smaller and simpler installations.  And I still use the DJB2 xor hash for hashing human-readable names (together with the length of the name); it's an excellent hash for that, especially on embedded targets.  I do not know DJB as a person, but I definitely appreciate his work output, because it is proof that robustness and efficiency are achievable in practice.  Unfortunately, it is also proof that if you manage to do so, you'll be targeted by those who are either unable or unwilling to achieve the same, and basically vilified.
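
For reference, the DJB2 xor variant mentioned above is tiny, which is part of why it suits small embedded targets (a sketch using the usual 5381 seed and a 32-bit state):

Code: [Select]
#include <stdint.h>

/* DJB2 "xor" variant: hash = (hash * 33) ^ c, starting from 5381. */
static uint32_t djb2_xor(const char *s)
{
    uint32_t hash = 5381;
    while (*s)
        hash = (hash * 33u) ^ (uint8_t)*s++;
    return hash;
}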

For my part, I've resigned myself to pointing out and helping with how to do better when someone obviously wants to do better, but it is becoming a chore, as I always end up having to defend "useless error checks" and "paranoia" against those who believe security and bugs are what you may work on after you've sold the product, not before.  In particular, I believe that even beginners should include all the necessary error checks, and work on writing useful comments (describing why and how the author intends the code to work, instead of describing what the code obviously does), because if you learn to write bad code with no comments, it is double the work to un-learn and re-learn to write good maintainable code with useful comments later on –– and most developers just don't bother to, because they're only human.  Me too: I still struggle to write better comments.

I'm not sure how this all relates to RISC-V.

I feel it is a useful development, combining a huge amount of experience and knowledge into a new architecture design that all can use, somewhat similar to how free/open software is an attempt to keep similar experience and knowledge from being locked up in dead-end products and rediscovered again and again; and I think there is some connection to the cultural/education issues or changes needed to truly exploit the possibilities we have here, but it is just a feeling or an opinion I am not clear or certain about yet.  I do wish it would soon-ish (say, within the decade) get similarly open competitors that approach the problem from a completely different angle, perhaps directed more towards hard real-time and massively concurrent tasks (somewhat similar to xCORE, NPUs, or general-purpose GPU computing); if not for any other benefit, then just for the competition that will keep the field alive and avoid stagnation.  I'm tempted to extend the concept to new programming languages, but that's another can of worms that tends to end up derailing threads... suffice it to say that programming languages have added many new useful abstractions on top, and compiler technology has progressed with leaps and bounds especially in optimization, but there has been very little true evolution at the core of programming languages, even though our needs (especially wrt. parallel and distributed computing) are changing.
« Last Edit: July 23, 2024, 05:22:07 am by Nominal Animal »
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3248
  • Country: ca
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #69 on: July 23, 2024, 03:46:59 pm »
Quote
Robust, secure, and efficient code is just not appreciated or considered important anymore

Many people consider that "efficient" means using as few resources as possible. Hence, they think that if you have lots of resources, there's no need to be efficient. That's the main reason efficiency is not considered important nowadays.

I believe that "efficient" means using the available resources to maximize your benefits. For example, if you use more memory to simplify your algorithms making them faster and more reliable, this makes your code more efficient. On the other hand, if you waste resources on various forms of bloat, this is not efficient because resources are used, but there's no benefit. Similarly, if you leave the majority of resources unused, this cannot be considered efficient because the unused resources don't bring any extra benefits.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6947
  • Country: fi
    • My home page and email address
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #70 on: July 23, 2024, 06:14:26 pm »
Quote
I believe that "efficient" means using the available resources to maximize your benefits.

It is an acceptable definition, yes.  To me, it is about efficiency and robustness as a tool.

Even in the example I mentioned, by choosing a better approach one can use more CPU resources but less wall clock time (by reading data into a self-sorting structure instead of reading all data first and then applying a sorting algorithm, because transfer from storage takes more time than sorting does), if that is what makes the users' work flows more efficient.  (In that sense, for a proprietary embedded device, people might disagree as to whose benefits should be maximized: the vendor/provider, or the end user, for the device to be "efficient".)

The issue, as I see it, is that "efficiency" in any definition just isn't a concern.  Even user data collection is based on what the vendor can get away with, rather than what the vendor actually needs, with the data needed for debugging known issues appended at end, often as an afterthought in later revisions.

"Robustness" is not a concern either.  I consider it to be the ability to deal with unexpected data and situations, and to report any causes for concern including possible loss of data.  For embedded devices and microcontrollers without user-accessible logging facilities, we do need to use a slightly different definition, perhaps the ability to recover from problem situations with minimal effect on the human users and whatever data/control the device deals with.

I personally use these terms in the generic sense, to be defined in detail for each product/tool, depending on how the product/tool is used.
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: Microchip announces PIC64 ... and it's RISC-V.
« Reply #71 on: August 02, 2024, 03:45:29 am »
Quote
The UK leaving the EU necessarily results in UK products being sold into the EU getting import duties.

Some companies located in the UK also have a warehouse in Ireland, from which shipping into the EU is import-free.
If you are lucky and find that they do, you can ask to have your goods shipped not from the UK but from Ireland  :D



The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

