Author Topic: RISC-V assembly language programming tutorial on YouTube  (Read 63014 times)


Offline richardman

  • Frequent Contributor
  • **
  • Posts: 427
  • Country: us
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #175 on: December 17, 2018, 02:40:34 am »
re: compact 68K code

I may be one of the few, but I like the idea of split register sets in the 68K. Compilers have no problems with A vs. D registers, and at worst it takes one extra move. With that, you can save one bit per register operand specifier. It can all add up.

...and we know that CISC ISAs like x86 can be decoded into micro-RISC ops, so I wonder what a highly tuned 68K, or for that matter PDP-11/VAX-11, micro-architecture could be like. We can throw away the flags ~_o if they make a difference, and add a couple of instructions as mentioned.
// richard http://imagecraft.com/
JumpStart C++ for Cortex (compiler/IDE/debugger): the fastest easiest way to get productive on Cortex-M.
Smart.IO: phone App for embedded systems with no app or wireless coding
 

Online brucehoultTopic starter

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: nz
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #176 on: December 17, 2018, 03:11:46 am »
Quote
re: compact 68K code

I may be one of the few, but I like the idea of split register sets in the 68K. Compilers have no problems with A vs. D registers, and at worst it takes one extra move. With that, you can save one bit per register operand specifier. It can all add up.

Yes, it's quite a natural split, worked well, and seldom caused any problems.

The main problem is just that it pre-determines that every program will want a 50/50 split of data values and pointers and that's not usually the case -- you usually want a lot fewer pointers. It's more or less ok with 8 of each, especially as a couple of address registers get used up by the stack pointer and maybe a frame pointer and a pointer to globals. But I don't think 16 of each would work well.

Quote
...and we know that CISC ISAs like x86 can be decoded into micro-RISC ops, so I wonder what a highly tuned 68K, or for that matter PDP-11/VAX-11, micro-architecture could be like. We can throw away the flags ~_o if they make a difference, and add a couple of instructions as mentioned.

You might be interested in:

http://www.apollo-core.com/

The basic 68000 (or at least 68010) instruction set is good.

The main problem it had was that they went in a bad direction with complexity in the 68020 just because, y'know, it's microcode and you can do anything. They had to back away from that in the 68040 and 68060.

Well, maybe the main problem it had was that it was proprietary and owned by a company that stopped caring about it enough to put in the necessary investment. And then Motorola did that *again* with the PowerPC, not putting in the investment necessary to give Apple mobile chips competitive with the Centrino -> Core 2, forcing Apple into Intel's arms. (IBM's G5 and successors were just fine for professional desktop systems.)

Is ColdFire still a thing? It doesn't seem to have had any love since about 2010.

Wikipedia says it topped out at 300 MHz, and it does around 1.58 Dhrystone VAX MIPS/MHz (slightly less than Rocket-based RISC-V).

OK, element14 is showing me 76 SKUs with a maximum of 250 MHz, but mostly at 50 or 66 MHz. So it's still a thing.
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4314
  • Country: us
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #177 on: December 17, 2018, 03:24:47 am »
Quote
before you get to your push multiple, the core has first read the vector table and fetched the first instruction of the ISR (prologue) automatically; it can often be done in parallel
I'm not convinced.  CM0 is listed as a von Neumann architecture, with both flash and RAM connected to the same memory bus matrix.  And it always has to save the PC anyway, so if it could do a simultaneous vector fetch (one word) and PC save (one word), it would be caught up by then, more or less.  (And ... I would tend to relocate the vector table to RAM anyway.)

Quote
Quote
Microchip was defining those symbols at link time
That's true. Although it's not a very good idea.
I actually asked Microchip about it.  They said it let them distribute binary libraries that worked across a range of chips (with identical peripherals at different locations).  That makes some sense - though it's a good thing disk space is cheap, with many vendors distributing close to one library per chip.  (OTOH, I'm not entirely happy with the idea of binary-only libraries.)
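To make that concrete, here's a minimal sketch of the scheme (the symbol name and address are made up for illustration; PROVIDE and --defsym are the real GNU ld mechanisms):

Code: [Select]
/* In the binary library, the peripheral's address is left unresolved: */
extern volatile unsigned int PORT[];   /* no definition anywhere in the C code */

void port_set_bits(int reg, unsigned int mask)
{
    PORT[reg] |= mask;                 /* relocation filled in at link time */
}

/* The per-chip linker script then pins the symbol:
       PROVIDE(PORT = 0x41004400);
   or equivalently on the command line:
       ld ... --defsym=PORT=0x41004400
   so one compiled library serves every chip that merely moves the block. */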
Quote
I like the idea of split register sets in the 68K.
It seems to work OK on the 68k, partially because there were a lot of them (16 each, right?).  The crop of 8-bit chips with "we have TWO index registers!  PLUS a stack!" was depressing... I'm not sure that it buys you much from a hardware implementation PoV - can't you pretty much use the same instruction bit you used to pick "Address or Data" to address twice as many GP registers?  (I don't quite remember which instructions were different between A/D registers.)  Maybe some speed-up from having separate banks?  (There's an idea for an optimization: "we have 32 registers organized in 4 banks of 8; operations that use registers from different banks can be more easily parallelized..."  Lots of CPUs have done this with memory - the Cray-1, for instance: "write your algorithm so that you access memory at 8-word intervals", or something like that.  Or disk - remember "interleaving"?)
Quote
wonder what a highly tuned 68K or PDP-11 ... could be like.
Yeah.  I wonder what the internal architecture of the more recent ColdFire chips is like; my impression is that that's about what they've done... The PDP-10 emulator "using an x86 for its microcode interpreter" apparently ran something like 6x faster than the fastest PDP-10 DEC ever built.  (And that was a decade or two ago, I think.)
 

Offline richardman

  • Frequent Contributor
  • **
  • Posts: 427
  • Country: us
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #178 on: December 17, 2018, 04:34:33 am »
I don't think there has been any new ColdFire implementation in YEARS. Possibly a process shrink, if that.

Once all the old HP printer models that used ColdFire are EOL'ed, that will probably be the end of the line... Oh wait, they're also used in automotive, and those go on forever as well. Heck, Tesla *might* have used the CPU12 in their cars.

re: 68K registers
It's 8 registers each for address and data.

Motorola had junked so many processor architectures in the 2000s that it's not even funny. 88K was one, and there's also mCore. By the look of it, it should have been competitive, but when even their own phone division wouldn't use it, that's the end.
// richard http://imagecraft.com/
JumpStart C++ for Cortex (compiler/IDE/debugger): the fastest easiest way to get productive on Cortex-M.
Smart.IO: phone App for embedded systems with no app or wireless coding
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17200
  • Country: us
  • DavidH
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #179 on: December 17, 2018, 04:48:00 am »
Quote
I may be one of the few, but I like the idea of split register sets in the 68K. Compilers have no problems with A vs. D registers, and at worst it takes one extra move. With that, you can save one bit per register operand specifier. It can all add up.

Quote
...and we know that CISC ISAs like x86 can be decoded into micro-RISC ops, so I wonder what a highly tuned 68K, or for that matter PDP-11/VAX-11, micro-architecture could be like. We can throw away the flags ~_o if they make a difference, and add a couple of instructions as mentioned.

The 68K had ISA features like double indirect addressing which made it even worse than x86 when scaled up.  The separate address and data registers were one of those features, although I do not remember why now.
« Last Edit: December 17, 2018, 08:12:15 pm by David Hess »
 

Offline andersm

  • Super Contributor
  • ***
  • Posts: 1198
  • Country: fi
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #180 on: December 17, 2018, 05:49:44 am »
Quote
Cortex M3, M4, M7 all have 12 cycle interrupt latency (M0 has 16). It's sitting there writing those eight registers out at one per clock cycle, exactly the same as you could do yourself in software.
The Cortex-M hardware prologue also sets up nested interrupts. So while they have relatively long interrupt latencies, you also get a decent amount of functionality out of those cycles.
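One concrete payoff, as a minimal sketch (the handler name follows the usual CMSIS startup-file convention, assumed here): because the hardware prologue stacks r0-r3, r12, lr, pc and xPSR itself, a handler can be a plain C function with no special attribute or assembly wrapper:

Code: [Select]
/* Sketch only: the hardware has already saved the AAPCS caller-saved
   registers on entry, so the compiler may clobber r0-r3/r12 freely. */
volatile unsigned int tick_count;

void SysTick_Handler(void)   /* name assumed from CMSIS startup code */
{
    tick_count++;            /* plain C; returning triggers the hardware epilogue */
}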
« Last Edit: December 17, 2018, 06:44:24 am by andersm »
 

Online brucehoultTopic starter

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: nz
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #181 on: December 17, 2018, 06:01:36 am »
Quote
Motorola had junked so many processor architectures in the 2000s that it's not even funny. 88K was one, and there's also mCore. By the look of it, it should have been competitive, but when even their own phone division wouldn't use it, that's the end.

I just took a close look at the ISA. It's a pretty clean, very RISC design with fixed-length 16-bit opcodes. No addressing modes at all beyond register plus a (very) short displacement, but it has special instructions designed to help create effective addresses quickly, e.g. rd = rd + 4*rs.
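In C terms the idea is something like this (the instruction mapping in the comment is the rd = rd + 4*rs operation described above, not verified compiler output):

Code: [Select]
#include <stdint.h>

uint32_t get_elem(const uint32_t *base, uint32_t i)
{
    /* With only register + short-displacement addressing, the address
       base + 4*i must be built explicitly. A fused scale-and-add
       (rd = rd + 4*rs) forms it in one instruction instead of a
       separate shift and add before the load. */
    return base[i];
}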

Chinese company C-SKY makes a series of CK6nn chips that use the M-CORE instruction set.

They also have a CK8nn series of chips that use a 16/32 bit opcode ISA called C-SKY V2. I'm not sure if it's just an extension of the 600-series ISA.

Anyway, they're switching to RISC-V.

The problem with Motorola ISAs -- 68k, 88k, PowerPC (with IBM), M-CORE -- isn't a technical one. It's that if you tie your company to them then you have a huge risk of being orphaned within a decade.

This, more than any technical superiority, is one of the things that makes RISC-V so attractive.
 

Offline richardman

  • Frequent Contributor
  • **
  • Posts: 427
  • Country: us
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #182 on: December 17, 2018, 08:00:26 am »
Quote
...It's that if you tie your company to them then you have a huge risk of being orphaned within a decade.

This, more than any technical superiority, is one of the things that makes RISC-V so attractive.

No disagreement from me. I think if some companies backed RISC-V based MCUs, that would give ARM serious competition. Of course, finding such a company could be difficult. A Chinese company may be a possibility.
// richard http://imagecraft.com/
JumpStart C++ for Cortex (compiler/IDE/debugger): the fastest easiest way to get productive on Cortex-M.
Smart.IO: phone App for embedded systems with no app or wireless coding
 

Online brucehoultTopic starter

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: nz
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #183 on: December 17, 2018, 08:38:42 am »
Quote
The 68K had ISA features like double indirect addressing which made it even worse than x86 when scaled up.  The separate address and data registers were one of those features, although I do not remember why now.

Not in the 68000. The 68020 did that, and even Motorola later realised it was a mistake.

Having memory-to-memory arithmetic is a problem for fast implementations though, even in the base 68000. x86 stops at reg-mem and mem-reg.

Of course neither one is a problem on big high end implementations that can break it into a bunch of uops and let OoO machinery chew on them.
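A quick illustration of the difference (a generic sketch, not actual compiler output):

Code: [Select]
void copy_word(long *dst, const long *src)
{
    *dst = *src;
    /* 68000: one instruction, MOVE.L (A1),(A0) -- both operands in memory.
       x86:   two, e.g. mov eax,[src] then mov [dst],eax (mem-reg + reg-mem).
       A big OoO core cracks either form into much the same load/store uops. */
}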
 

Offline legacy

  • Super Contributor
  • ***
  • !
  • Posts: 4415
  • Country: ch
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #184 on: December 17, 2018, 11:37:01 am »
Quote
Motorola had junked so many processor architectures in the 2000s that it's not even funny. 88K was one

Data General built a few computers based on the 88K, and a couple of companies in Japan made 88K computers as well (see the OpenBSD supported-machines list), but they were a niche, and their orders didn't amount to much money.

The 88000 appeared on the market too late, later than MIPS and SPARC, and since it was not compatible with the 68K it was not competitive at all: Amiga/classic? 68k! Atari ST? 68k! Macintosh/classic? 68k!

In short, Motorola was not happy because they had trouble selling the chip.

Now I know that the 88K was abandoned after the DASH prototype, when Motorola was collaborating with Stanford University. It sounds like it was the last chance to get a foot into the supercomputer field, which was a niche but with a lot of money involved, and yet again ... bad luck, since for some obscure reason someone preferred to go on with MIPS rather than the 88K.

Was it the last lost opportunity? Definitely YES, since someone with a lot of money, namely Silicon Graphics, chose to use the DASH technology combined with MIPS, and this was the beginning of the CrayLink 2, 3, 4, ... SGI supercomputers, yet again with a lot of money coming back.

In such a scenario there was no choice for Motorola: 88k project dropped!

As far as I understand, IBM had been working on the S/370 for a long while, and their research was on the IBM 801 chip, which was the first POWER chip, so ... to make money, Motorola promoted a collaboration with Apple and IBM, which then developed the first PowerPC chip: the MPC601 appeared in 1992, a sort of hybrid chip between the POWER1 spec and the new PowerPC spec.

This way the managers at Motorola were happy. Anyway, this didn't last long; these companies then dropped the collaboration.

Now IBM is on POWER9, which is funded by DARPA, which means a lot of money for IBM. POWER9 workstations and servers are very expensive; say, the entry level for the low-spec workstation is no less than 5K USD  :palm:
 

Online brucehoultTopic starter

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: nz
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #185 on: December 17, 2018, 12:07:33 pm »
Quote
The 88000 appeared on the market too late, later than MIPS and SPARC, and since it was not compatible with the 68K it was not competitive at all: Amiga/classic? 68k! Atari ST? 68k! Macintosh/classic? 68k!

If 2015 is not too late (or 2012 for Arm64) then 1990 was certainly not too late.

88000 is an excellent ISA, even today, and if someone put good engineers on to making chips and good marketers on to selling it then it could be competitive.

Particular chips have a short lifespan, but a good ISA can be good for 50 years. The main thing is to *start* with a plan for compatible 16, 32, 64, 128 bit pointer/integer successors.

If I've done my arithmetic correctly, if you could somehow store 1 bit on every atom of silicon (or carbon, or ...), then 2^128 bytes of storage would need 100,000,000,000 tonnes of silicon. That's a cube a bit under 4 km on a side.
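Here's the back-of-envelope check in C, with the physical constants I assumed (28.09 g/mol and 2.33 g/cm^3 for silicon):

Code: [Select]
#include <stdio.h>
#include <math.h>

int main(void)
{
    double bits   = pow(2.0, 131);             /* 2^128 bytes = 2^131 bits */
    double grams  = bits / 6.022e23 * 28.09;   /* atoms -> moles -> grams  */
    double tonnes = grams / 1e6;
    double km3    = grams / 2.33 / 1e15;       /* cm^3 -> km^3             */
    printf("%.2e tonnes, cube %.2f km on a side\n", tonnes, cbrt(km3));
    return 0;   /* about 1.3e11 tonnes, a cube roughly 3.8 km on a side */
}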

128 bits is probably going to be enough for a while, even with sparse address spaces.

Quote
In short, Motorola was not happy because they had trouble selling the chip.

No one's fault but Motorola. Great engineers, awful management.

Quote
As far as I have understood, IBM was working on S/370 since a long while, and their researching was on the IBM 801 chip, which was the first POWER chip

No, that's not correct. The IBM 801 was the world's first RISC chip (though that name wasn't invented by Dave Patterson until several years later when he independently came up with the concept) but it's very different to POWER/PowerPC. For a start, it had both 16 bit and 32 bit opcodes that could be freely mixed, an important code density feature that didn't find its way back into RISC until Thumb2 in 2003.
 

Offline legacy

  • Super Contributor
  • ***
  • !
  • Posts: 4415
  • Country: ch
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #186 on: December 17, 2018, 12:27:33 pm »
Quote
If 2015 is not too late (or 2012 for Arm64) then 1990 was certainly not too late.

It was too late for RISC workstations, because SPARC and MIPS ones were already promoted and in use before Motorola released the 88K, and it was too late for supercomputers, yet again due to MIPS at SGI.

If the "Dash/88k" project at Standford University or the MIT "T/88110MP" project hadn't had failed (at the management level, not at the technical level) ... but they did.

This is a fact!

Quote
The IBM 801

The 801 was a proof of concept, made in 1974. But POWER and PowerPC are derived from the evolution of this POC. Directly and indirectly, since, of course, in 1974 "RISC" was not what we know today, but the idea was already there in the simulator of the first 801. It's written in every Red, Green, and Blue book published by IBM.
 

Offline legacy

  • Super Contributor
  • ***
  • !
  • Posts: 4415
  • Country: ch
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #187 on: December 17, 2018, 12:44:34 pm »
What would happen to IBM's POWER9 if DARPA didn't fund it?
 

Online brucehoultTopic starter

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: nz
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #188 on: December 17, 2018, 01:10:37 pm »
Quote
The IBM 801

The 801 was a proof of concept, made in 1974. But POWER and PowerPC are derived from the evolution of this POC.

Derived, certainly. But very different.

Btw, the project formally started in October 1975 though some investigation work had been done before that. The first running hardware was in 1978.
 

Offline legacy

  • Super Contributor
  • ***
  • !
  • Posts: 4415
  • Country: ch
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #189 on: December 17, 2018, 01:18:48 pm »
My IBM Red, Green, and Blue books (a sort of encyclopedia about POWER and PowerPC) point to this article.

Probably to underline that one of their men, Mr. Cocke, received the Turing Award in 1987, the US National Medal of Science in 1994, and the US National Medal of Technology in 1991  ;D

To me, it sounds sort of like "hey? we are IBM, you might know us for the ugliest thing ever invented - the IBM-PeeeeCeeeeee - PersonalComputers and IBM-PC-compatible computers - which are really shitty, but we also do serious stuff. Don't you believe our words? See that one of our prestigious men received an award for having invented RISC before any H&P book started talking about it".

IBM is really funny :D
« Last Edit: December 17, 2018, 01:27:27 pm by legacy »
 

Online brucehoultTopic starter

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: nz
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #190 on: December 17, 2018, 01:37:48 pm »
Quote
My IBM Red, Green, and Blue books point to this article.

Yup, that ties in with my sources. In 1974 they wanted to make faster telephone exchanges (my sources said they wanted to handle 300 calls per second, and decided 12 MIPS was needed for that); they did some thinking and wrote what was effectively a white paper, did some preliminary design on an instruction set, and then got approval and funding, and the 801 project formally kicked off in October 1975.

Your article says first hardware was in 1980, compared to my previous message that says 1978 (and your message that the 801 was "made in 1974"). I believe 1980 was the first production hardware for deployment, or possibly the 2nd prototype after they got experience with the first one and made changes.

One of the changes was dropping the variable length 16/32 bit instructions and going with 32 bit only -- mostly because they needed to support virtual memory in the production model and didn't want to have to support instructions crossing a VM page boundary. The 2nd version also increased the number of registers from 16 to 32, and increased the register size (and addresses) from 24 bits to 32 bits. They also changed from destructive 2-address instructions to 3-address, so although instructions increased in size from an average of about 3 bytes each (common for Thumb2 and RISC-V these days too) to exactly four bytes each, programs needed fewer instructions so the increase in program size was less than 33%.
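The 2-address vs 3-address trade-off in miniature (illustrative syntax only, not actual 801 mnemonics):

Code: [Select]
long add3(long a, long b)
{
    long c = a + b;
    /* Destructive 2-address encoding (first 801 style, invented syntax):
           mov rc, ra      ; copy first, because add overwrites an input
           add rc, rb
       Non-destructive 3-address encoding (the revised 801 onward):
           add rc, ra, rb
       Where the copy was needed, two ~3-byte opcodes become one 4-byte
       opcode, which is why program size grew by less than the naive 33%. */
    return c;
}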

Quote
Probably to underline that one of their men, Mr. Cocke, received the Turing Award in 1987, the US National Medal of Science in 1994, and the US National Medal of Technology in 1991  ;D

Indeed he did, and very well deserved.
 

Online brucehoultTopic starter

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: nz
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #191 on: December 17, 2018, 01:39:47 pm »
Quote
See that one of our prestigious men received an award for having invented RISC before any H&P book started talking about it

I see you edited your message while I was replying to it.

H&P had to wait until 2017 to receive their Turing Awards.

Quote
“The main idea is not to add any complexity to the machine unless it pays for itself by how frequently you would use it. And so, for example, a machine which was being used in a heavily scientific way, where floating point instructions were important, might make a different set of tradeoffs than another machine where that wasn't important. Similarly, one in which compatibility with other machines was important or in which certain types of networking was important would include different features. But in each case they ought to be done as the result of measurements of relative frequency of use and the penalty that you would pay for the inclusion or non-inclusion of a particular feature.”

Joel Birnbaum
FORMER DIRECTOR OF COMPUTER SCIENCES AT IBM
“Computer Chronicles: RISC Computers (1986),”
October 2, 1986

Now there is a guy absolutely on the same page as H&P. (And the people who invented RISC-V: namely P and his students, and his students' students. H is a fan too.)


« Last Edit: December 17, 2018, 01:54:55 pm by brucehoult »
 

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 3248
  • Country: ca
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #192 on: December 17, 2018, 06:05:18 pm »
Quote
I compiled them the way they came.

It doesn't matter if you deliberately tweaked the compiler options and offsets to make RISC-V look good, or they magically came out this way. The problem is that your tests do not reflect reality, but rather a blunder of inconsequential side effects.

If you tweak the offsets a different way, and use the default Makefile from my computer, the whole thing goes from this:

Code: [Select]
000001b5 <main>:
 1b5:   e8 20 00 00 00          call   1da <__x86.get_pc_thunk.ax>
 1ba:   05 3a 1e 00 00          add    $0x1e3a,%eax
 1bf:   8b 80 0c 00 00 00       mov    0xc(%eax),%eax
 1c5:   81 88 94 00 00 00 80    orl    $0x80,0x94(%eax)
 1cc:   00 00 00
 1cf:   81 88 c8 00 00 00 00    orl    $0x1000,0xc8(%eax)
 1d6:   10 00 00
 1d9:   c3                      ret   

000001da <__x86.get_pc_thunk.ax>:
 1da:   8b 04 24                mov    (%esp),%eax
 1dd:   c3                      ret   

00002000 <PORT>:
    2000:       00 f0                   add    %dh,%al

to this:

Code: [Select]
08048450 <main>:
 8048450: a1 c0 95 04 08        mov    0x80495c0,%eax
 8048455: 83 48 30 20          orl    $0x20,0x30(%eax)
 8048459: 83 48 40 20          orl    $0x20,0x40(%eax)
 804845d: c3                    ret   

For what it's worth, it's now 14 bytes for i386 (plus 4 bytes of data, of course), which is now the leader, way better than Motorola, and leaving RISC-V absolutely in the dust.

Here's the tweaked C code:

Code: [Select]
#include <stdio.h>
#include <stdint.h>

#define PORT_PINCFG_DRVSTR (1<<5)

struct {
    struct {
        struct {
            uint32_t reg;
        } PINCFG[16];
        struct {
            uint32_t reg;
        } DIRSET;
    } Group[10];
} *PORT = (void*)0xdecaf000;

void main(){
    PORT->Group[0].PINCFG[12].reg |= PORT_PINCFG_DRVSTR;
    PORT->Group[0].DIRSET.reg |= 1<<5;
}

Here's the line from the Makefile:

Code: [Select]
gcc a.c -o c -save-temps -O1 -fomit-frame-pointer -masm=intel

Here's the assembler output:

Code: [Select]
.file "a.c"
.intel_syntax noprefix
.text
.globl main
.type main, @function
main:
mov eax, DWORD PTR PORT
or DWORD PTR [eax+48], 32
or DWORD PTR [eax+64], 32
ret
.size main, .-main
.globl PORT
.data
.align 4
.type PORT, @object
.size PORT, 4
PORT:
.long -557125632
.ident "GCC: (GNU) 4.5.0"
.section .note.GNU-stack,"",@progbits




 

Online brucehoultTopic starter

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: nz
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #193 on: December 18, 2018, 02:01:52 am »
Quote
I compiled them the way they came.

It doesn't matter if you deliberately tweaked the compiler options and offsets to make RISC-V look good, or they magically came out this way. The problem is that your tests do not reflect reality, but rather a blunder of inconsequential side effects.

If you tweak the offsets a different way

Oh come on. You not only change the data structure (which I freely admit I made up at random, as westfw didn't provide it) to be less than 128 bytes to suit your favourite ISA, you *ALSO* change the bit offsets in the constants to be less than 8 so the masks fit in a byte. If you hadn't done *both* of those then your code would have 32-bit literals for both offset and bit mask, the same as mine, not 8-bit. You also changed the code compilation and linking model from that used by all the other ISAs, which would all benefit pretty much equally from the same change.

And you accuse me of bad faith?
« Last Edit: December 18, 2018, 02:05:28 am by brucehoult »
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4314
  • Country: us
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #194 on: December 18, 2018, 02:37:12 am »
Quote
[CM0 and limitations on offsets/constants, making assembly unpleasant]
(I did specifically choose offsets and bitvalues to be "beyond" what CM0 allows.)

As another example, I *think* that the assembly for my CM0 example (the actual data structure is from Atmel SAMD21, but it's scattered across several files) can be improved by accessing the port as 8bit registers instead of 32bit.  All I have to do is look really carefully at the datasheet (and test!) to see if that actually works, rewrite or obfuscate the standard definitions in ways that would confuse everyone and perhaps not be CMSIS-compatible, and remember to make sure that it remains legal if I move to a slightly different chip.
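Something like this hypothetical sketch (the address is my guess at the SAMD21 PORT PINCFG[12] location and the bit is a DRVSTR-style flag; whether the peripheral really tolerates byte writes is exactly the datasheet question above):

Code: [Select]
#include <stdint.h>

/* Assumed address: PORT base + PINCFG[12] offset. Purely illustrative. */
#define PINCFG12 (*(volatile uint8_t *)0x4100444Cu)

void set_drvstr(void)
{
    /* An 8-bit register implies an 8-bit mask, so a CM0 can use
       movs #imm8 plus ldrb/orrs/strb instead of 32-bit literals. */
    PINCFG12 |= 0x40;   /* assumed DRVSTR bit position */
}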

Perhaps I have a high bar for what makes a pleasant assembly language.
 

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 3248
  • Country: ca
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #195 on: December 18, 2018, 03:50:18 am »
Quote
Oh come on. You not only change the data structure (which I freely admit I made up at random, as westfw didn't provide it) to be less than 128 bytes to suit your favourite ISA, you *ALSO* change the bit offsets in the constants to be less than 8 so the masks fit in a byte. If you hadn't done *both* of those then your code would have 32-bit literals for both offset and bit mask, the same as mine, not 8-bit. You also changed the code compilation and linking model from that used by all the other ISAs, which would all benefit pretty much equally from the same change.

I restored the offsets to where they were in the original code. I restored the linkage to normal. The masks, I admit. But the masks are not important, because you can achieve the same effect with byte access, so the mask should never need to be more than 8 bits. Of course, the superoptimized C compiler couldn't figure that out, so I had to nudge the masks a bit. When we get better compilers, there will be no need to tweak masks, right?

The $1M question is: how is my tweaking any worse than yours?

Quote
And you accuse me of bad faith?

Of course not.
 

Online brucehoultTopic starter

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: nz
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #196 on: December 18, 2018, 04:05:39 am »
Quote
[CM0 and limitations on offsets/constants, making assembly unpleasant]
(I did specifically choose offsets and bitvalues to be "beyond" what CM0 allows.)

As another example, I *think* that the assembly for my CM0 example (the actual data structure is from Atmel SAMD21, but it's scattered across several files) can be improved by accessing the port as 8bit registers instead of 32bit.  All I have to do is look really carefully at the datasheet (and test!) to see if that actually works, rewrite or obfuscate the standard definitions in ways that would confuse everyone and perhaps not be CMSIS-compatible, and remember to make sure that it remains legal if I move to a slightly different chip.

Perhaps I have a high bar for what makes a pleasant assembly language.

x86, 68k and VAX were all designed at a time when maximizing the productivity of the assembly language programmer was seen as one of the highest (if not the highest) priorities. They'd gone past simply trying to make a computer that worked, and even past making the fastest computer, and come to a point where computers were not only fast *enough* for many applications but had hit a speed plateau. (It's hard to believe now that Apple sold 1 MHz 6502 machines for over *seventeen* years, and the Apple //e alone for 11 years.)

What they had was a "software crisis". The machines had quirky instruction sets that were unpleasant for assembly language programmers -- and next to impossible for the compilers of the time to generate efficient code for.

The x86, 68k and VAX were all vastly easier for the assembly language programmer than their predecessors the 8080, 6800, and PDP-11 (or PDP-10). They also were better for compilers, though people still didn't trust them.

The RISC people came along and said "If you simplify the hardware in *this* way then you can build faster machines cheaper, compilers actually have an easier time making optimal code, and everyone will be using high level languages in future anyway".

I remember the time when RISC processors were regarded as being next to impossible (certainly impractical) to program in assembly language!

A lot of that was because you had to calculate instruction latencies yourself and put dependent instructions far enough apart that the result of the previous instruction was already available -- and not doing it meant not just that your program was not as efficient as possible, but that it didn't work at all! Fortunately, that stage didn't last long, for two reasons: 1) your next-generation CPU would have different latencies (sometimes longer, as pipeline lengths increased), meaning old binaries would not work, and 2) as CPUs increased in MHz faster than memory did, caches were introduced, and then you couldn't predict whether a load would take 2 cycles or 10, and the same code had to be able to cope with 10 but run faster when you got a cache hit.
 

Online brucehoultTopic starter

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: nz
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #197 on: December 18, 2018, 04:35:16 am »
Quote
The $1M question is: how is my tweaking any worse than yours?

That's easy. I'm taking code provided by someone else without any reference to a specific processor and then using default compiler settings (adding only -O, and -fomit-frame-pointer for the m68k as it's the only one that generated a frame otherwise) and seeing how it works out.

You on the other hand worked backwards from a processor to make code that suited it.

If westfw had provided the definitions for the structure he was accessing then I would have used that, as is. But he didn't so I had to come up with something in order to have compilable code.
 

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 3248
  • Country: ca
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #198 on: December 18, 2018, 02:31:05 pm »
Quote
You on the other hand worked backwards from a processor to make code that suited it.

Haven't you?

Isn't this the way it should be? When you compile for a CPU, you select the settings which maximize performance for that particular CPU instead of settings which produce bloat. As, by your own admission, you did for Motorola.

If you haven't done this for RISC-V, why don't you tweak it so that it produces better code? Go ahead, try to beat my 14 bytes, or even get remotely close.
 

Online brucehoultTopic starter

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: nz
Re: RISC-V assembly language programming tutorial on YouTube
« Reply #199 on: December 18, 2018, 03:11:04 pm »
Quote
You on the other hand worked backwards from a processor to make code that suited it.

Haven't you?

No.

Quote
Isn't this the way it should be? When you compile for a CPU, you select the settings which maximize performance for that particular CPU instead of settings which produce bloat. As, by your own admission, you did for Motorola.

If you haven't done this for RISC-V, why don't you tweak it so that it produces better code? Go ahead, try to beat my 14 bytes, or even get remotely close.

Not interested in winning some dick size competition. If RISC-V ends up in the middle of the pack and competitive on measures such as code size or number of instructions just by compiling straightforward C code in a wide variety of situations with no special effort, then I'm perfectly content. Other factors are then more important.

Everyone is going to "win" at some comparison. x86 can OR a constant with a memory location in a single instruction. Cool. So can the dsPIC33. Awesome. That has approximately zero chance of being the deciding factor on which processor is used by anyone.

You didn't change the compiler settings. You changed the semantics of the code -- you changed what problem is being solved.
« Last Edit: December 18, 2018, 03:33:22 pm by brucehoult »
 

