Author Topic: ASM programming is FASCINATING!  (Read 8969 times)


Offline etiTopic starter

  • Super Contributor
  • ***
  • !
  • Posts: 1801
  • Country: gb
  • MOD: a.k.a Unlokia, glossywhite, iamwhoiam etc
ASM programming is FASCINATING!
« on: July 27, 2020, 02:54:44 am »
Okay, so my mind works best at the lowest possible (humanly parseable) level with regard to machines. I find ASM particularly fascinating. I also use bash a great deal and enjoy that, but ASM, and knowing how the internal machinations of digital machines work, REALLY turns on lightbulbs in my brain.

I'm a lonnnnnnng way off knowing even 5% of what I need to know, but I am gradually getting used to PIC ASM, and am now learning by messing around with an Altair 8800 simulator and loading registers by hand, and also reading as much ASM as I can, no matter what the arch.

I've been looking here also, at 6502: http://www.obelisk.me.uk/6502/reference.html

If anyone would like to comment, or advise on how best to get a true, firm grasp of ASM, I'm open to suggestions.

Thanks guys and girls.
 

Offline Brumby

  • Supporter
  • ****
  • Posts: 12383
  • Country: au
Re: ASM programming is FASCINATING!
« Reply #1 on: July 27, 2020, 03:19:26 am »
First, I had to get a handle on what ASM programming was.  I took a guess and found I was right - but TLAs have proliferated far further than seems necessary (IMO).

My first foray into programming was writing in IBM assembler, and one of my favourite books was the S/360 Principles of Operation manual.  I did 6 years writing applications software in assembler for one company and then changed jobs to another on the strength of my assembler skills.

Yes, it is good fun when you pick up a dump, locate the registers then start reading the instructions directly off the hex.  Debugging can be fun, though, especially when the module you're checking is 7MB ... on a 15x11 printout a couple of inches thick.


My recommendation is to start out by finding a development platform (which I'll leave others to suggest) and then try a simple example program, and/or take a program that actually does something and change a few things to see if you can make it do something different - something that matches your intention.

You will find, however, that there are many routines you will use over and over, so re-writing them every time will wear you out very quickly.  Having blocks of code you can drop in eases this, but mostly there will be precompiled modules you just call (DLLs, for example).

Also, depending on your development environment, your ability to create garbage code is greatly increased.  Having strict discipline in how you do things becomes extremely important.  One example is spaghetti code - all too easy to write in assembler, and great for getting yourself into trouble if you're not careful.  Learn to do structured programming.  It will save you when programs get complex.
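
To make that concrete, here is a minimal sketch (AVR assembly, GNU as syntax, with a hypothetical process_byte routine - not tied to any real project) of the kind of structure that keeps assembler readable: one entry, one exit, one clearly named job per routine, and no jumping into the middle of other code.

Code: [Select]
; Structured counted loop - a sketch only.
; Assumes X (r26:r27) already points at the data and that the
; hypothetical process_byte routine preserves r24.
        ldi   r24, 10           ; loop counter: 10 bytes to handle
copy_loop:
        ld    r22, X+           ; fetch the next byte, post-increment the pointer
        rcall process_byte      ; one named job per routine
        dec   r24               ; count down
        brne  copy_loop         ; single exit: fall through when done
        ret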
 

Offline MK14

  • Super Contributor
  • ***
  • Posts: 4952
  • Country: gb
Re: ASM programming is FASCINATING!
« Reply #2 on: July 27, 2020, 03:25:42 am »
Turning your question upside down and inside out:

What exactly do you want your assembly language (ASM) programs to do - perhaps in the future, once you know how to do ASM programming well?

Answering that will lead to potential recommendations for the best processor(s) to go for and/or emulate/simulate.

Then you can go typing away, creating/learning your assembly language, and run it.

Unlike with high-level languages, it is not so easy to move between architectures, so the sooner you know which processor you want to spend significant amounts of time with, the better.
You can still move to a different architecture reasonably easily, but any code you have already written will no longer work without fairly major conversion/rewriting.
Unless, that is, you just want to learn tiny bits of lots of different architectures, and never actually write any significantly long, working programs as such.

Your main choices would be: 8 bit - relatively easy, straightforward, and suitable for small/simple programs doing simple tasks, such as counting from 1 to 100 on a seven-segment display (there's a sketch of that at the end of this post).

16/32 bit, such as the Motorola 68000 series (relatively obsolete these days, but practicable to program by hand, unlike many later processors, which can be done but are considerably harder). Better suited to bigger, more complicated programs (although 8 bit could in theory do just about anything), often a fair bit faster, and potentially easier for more complicated tasks.

Desktop PC x86: hardware floating point available, can handle very big and complicated programs, and you probably already have access to these.

At a guess, maybe Arduino is a good choice. You can get them (clones) relatively cheaply (e.g. £1.50), and program them from a PC using USB and freely available IDEs. They are 8 bit (other versions are available as well), there is tons of information about them on the internet, and they have a fairly straightforward 8-bit architecture.
There are different versions, which I won't list here, such as the Arduino Uno.
They can easily interface to hardware (pre-built I/O stuff is cheaply and readily available, such as displays, keyswitch boards, etc.), and they have a similar 8-bit instruction set to some of the older-generation 8-bit processors.

You can even mix C code with some ASM, and/or use pre-existing libraries.
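
Since the counting example above keeps coming up, here is what it might look like in AVR assembly - a sketch only, with the display and the pacing hidden behind hypothetical show_number and delay_ms routines (assumed to preserve r24):

Code: [Select]
; Count 1 to 100 on an imaginary seven-segment display - sketch only.
        ldi   r24, 1            ; start value
count_loop:
        rcall show_number       ; hypothetical: drive the display from r24
        rcall delay_ms          ; hypothetical: pace it so a human can read it
        inc   r24
        cpi   r24, 101          ; gone past 100?
        brne  count_loop
        ret                     ; or rjmp back to the start to repeat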
« Last Edit: July 27, 2020, 03:37:44 am by MK14 »
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: ASM programming is FASCINATING!
« Reply #3 on: July 27, 2020, 04:03:51 am »
I've gone all the way around the circle, I think; early on I started with QBASIC (because it was accessible, and easy), and some rudimentary x86 ASM (because MS-DOS has DEBUG, and I felt a cryptic mystique around using it).  Later, I moved on to Java and C, but also learned a few other instruction sets (AVR and Z80).  Most recently, I've finally done a complete project, in GCC, incorporating one ASM module (for AVR).  So I've learned GAS' particular syntax, avr-gcc's ABI, etc.
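
For anyone wondering what such a module looks like, here is a made-up minimal example (not from Tim's project) in GAS syntax that follows the avr-gcc ABI: the first 8-bit argument arrives in r24, the 8-bit result is returned in r24, and r18-r27/r30/r31 may be clobbered freely.

Code: [Select]
; double_it.S - hypothetical avr-gcc-callable routine, GAS syntax.
; C prototype (assumption): uint8_t double_it(uint8_t x);
        .section .text
        .global double_it
double_it:
        lsl   r24              ; x << 1, left in the return register
        ret

Assembled alongside the C sources (avr-gcc will happily take a .S file), it can then be called like any other C function once the prototype is declared.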

I don't recommend x86 for starting out, but it's certainly accessible and available... ARM too, especially these days.  Learning on a classic platform, as recommended above, is probably a good idea.  Take some months to learn it, write some algorithms; take a break, come back and do some more.  Learn a new instruction set, and so on.  Repeat until you're comfortable with how most traditional ISAs work.

I... well, what I'd like is to be able to write C in terms of "these statements compile to those instructions", and so on.  But there are too many layers of abstraction, and the optimization is too ineffective, for that to be the case, at least on the last (AVR) project.  This is an effective way to approach HDL I think, i.e. certain combinations of statements (processes) produce certain kinds of registers, gates and so on.  But a compiler like GCC, at least, doesn't quite allow that sort of writing.  I can say I'm fluent in both formats, and would gladly write that way if I could.

(The danger of that type of writing is that the high-level code starts to take on structures that bubble up from the underlying hardware; porting to different hardware may then require structural changes to the code itself, if not semantic ones -- as long as the C code is valid, not abusing edge cases, built-ins and such, it should compile correctly on any platform -- but for the best optimization results this can be a concern.)

Heh... I guess what I'm really trying to say is not so much anything about an ISA, just a whine about how GCC 9 is so bad on AVR that I can't "write for the platform" even if I try. :-DD  The upside is that, since I'm not even able to write for the platform (in a usefully optimized way), the code is forced to remain general and should compile, if not well, then at least cromulently, for other platforms. :P

But anyway -- even if you can't "write for the platform" (and, really, you shouldn't strive for it, exactly because of portability), you can always review the assembled output from the compiler, see how it's turning statements into instructions, and see which expressions and statements have more overhead than others, etc.  It's a good exercise, and being comfortable with the platform's ISA makes debugging so much easier.

Tim
« Last Edit: July 27, 2020, 04:08:03 am by T3sl4co1l »
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4137
  • Country: gb
  • Doing electronics since the 1960s...
Re: ASM programming is FASCINATING!
« Reply #4 on: July 27, 2020, 01:15:53 pm »
The original Z80 etc CPUs were programmed almost entirely in assembler, partly because memory was short and partly because compilers generated horrible code.

I wrote megabytes of code for these chips in the 1980s and 1990s.

You need to be pretty good - much better than today's "PHP hackers" - and disciplined. But your general productivity won't be anywhere near as good as that of a good C programmer doing the same job.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline BravoV

  • Super Contributor
  • ***
  • Posts: 7549
  • Country: 00
  • +++ ATH1
Re: ASM programming is FASCINATING!
« Reply #5 on: July 27, 2020, 01:46:51 pm »
It's a lost art IMHO. In the old days it was called the demoscene, with things like the 4K code challenge; another example -> This 4-kilobyte demo squeezes a universe of fractals into the size of a Word document

This scene and the music (crap, I know) are generated by just 1 KB of code  :o
« Last Edit: July 27, 2020, 02:06:52 pm by BravoV »
 
The following users thanked this post: Circlotron, MK14, shakalnokturn

Offline Benta

  • Super Contributor
  • ***
  • Posts: 6261
  • Country: de
Re: ASM programming is FASCINATING!
« Reply #6 on: July 27, 2020, 06:00:58 pm »
Still lots of assembler code being written today, but mainly for things very close to the hardware:
typically CPU/peripheral initialization, boot loaders, exception handlers and of course device drivers (at least at the lowest level).
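
For flavour, the sort of reset-time initialization meant here, sketched in AVR assembly; the numbers below (RAMEND, the stack-pointer I/O addresses) are assumptions for an ATmega328P-class part and would normally come from the device header.

Code: [Select]
; Hypothetical bare-metal startup stub - a sketch only.
        .equ  RAMEND_LO, 0xFF      ; assumption: SRAM ends at 0x08FF
        .equ  RAMEND_HI, 0x08
        .equ  SPL_IO,    0x3D      ; stack pointer low  (I/O address)
        .equ  SPH_IO,    0x3E      ; stack pointer high (I/O address)
        .section .text
        .global reset_init
reset_init:
        clr   r1                   ; avr-gcc convention: r1 holds zero
        out   0x3F, r1             ; clear SREG (interrupts off)
        ldi   r24, RAMEND_HI
        out   SPH_IO, r24          ; put the stack at the top of SRAM
        ldi   r24, RAMEND_LO
        out   SPL_IO, r24
        rjmp  main                 ; hand over to the application (assumed to exist)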

The most developed and mature architecture for assembler programming is IMHO the 68K, but today it's a dead end. ARM is probably the way to go today. But now we're moving back into the "architecture wars" of the 1980s/90s.

:)




« Last Edit: July 27, 2020, 06:05:11 pm by Benta »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28059
  • Country: nl
    • NCT Developments
Re: ASM programming is FASCINATING!
« Reply #7 on: July 27, 2020, 07:44:24 pm »
I recommend not starting with MIPS; it has a branch delay slot in the pipeline, meaning the instruction immediately after a jump/branch is always executed. Other than that, each architecture has its pros and cons.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline NivagSwerdna

  • Super Contributor
  • ***
  • Posts: 2507
  • Country: gb
Re: ASM programming is FASCINATING!
« Reply #8 on: July 27, 2020, 07:50:33 pm »
http://www.retrotechnology.com/memship/memship.html

When you get to the bit level (the Elf is loaded using the switches) then you will get a good understanding!

ASM is too high level... you need to know how the bits map to the opcodes.

 :)

PS
1802 has a nice opcode layout
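
To make "how the bits map to the opcodes" concrete with a different example (AVR this time, hand-assembled here, so check the encodings against the instruction-set manual rather than trusting me):

Code: [Select]
; AVR uses fixed 16-bit instruction words, so hand-assembly is easy to verify.
; LDI Rd,K encodes as 1110 KKKK dddd KKKK, where dddd = register number - 16:
        ldi   r24, 0x55      ; 1110 0101 1000 0101  ->  0xE585
; RJMP k encodes as 1100 kkkk kkkk kkkk (k = signed word offset), so a jump-to-self is:
loop:   rjmp  loop           ; 1100 1111 1111 1111  ->  0xCFFF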
 

Offline Benta

  • Super Contributor
  • ***
  • Posts: 6261
  • Country: de
Re: ASM programming is FASCINATING!
« Reply #9 on: July 27, 2020, 07:57:58 pm »
http://www.retrotechnology.com/memship/memship.html

When you get to the bit level (the Elf is loaded using the switches) then you will get a good understanding!

ASM is too high level... you need to know how the bits map to the opcodes.

 :)

PS
1802 has a nice opcode layout

Nah! If you want to go really deep, the Motorola MC14500 is the thing.  :)
http://www.ganssle.com/articles/quirkychips.html

 

Offline Benta

  • Super Contributor
  • ***
  • Posts: 6261
  • Country: de
Re: ASM programming is FASCINATING!
« Reply #10 on: July 27, 2020, 08:01:19 pm »
I recommend not starting with MIPS; it has a branch delay slot in the pipeline, meaning the instruction immediately after a jump/branch is always executed. Other than that, each architecture has its pros and cons.

To paraphrase Frank Zappa: "MIPS is not dead, it just smells funny."

« Last Edit: July 27, 2020, 09:11:44 pm by Benta »
 

Offline KL27x

  • Super Contributor
  • ***
  • Posts: 4108
  • Country: us
Re: ASM programming is FASCINATING!
« Reply #11 on: July 27, 2020, 08:30:41 pm »
If you want some guidance on basic practices, gotchas, and the finer points of MPLAB IDE as pertains to aiding in assembly programming, you might be interested in David Gooligum's PIC assembly tutorials.
 

Offline NivagSwerdna

  • Super Contributor
  • ***
  • Posts: 2507
  • Country: gb
Re: ASM programming is FASCINATING!
« Reply #12 on: July 28, 2020, 10:55:07 am »
I should have mentioned... there is a really nice set of videos on YT from Robert Paz (Sadly RIP now)

https://www.youtube.com/playlist?list=PLifLftIJFUm-gv-OQr_7WbsKdn3i6zdZD

These, although the IDE part might be slightly out of date, show Assembly on the Arduino Uno / AVR.... so if you want to explore that architecture it might be interesting.

It all depends... having breakpoints is always nice... that might affect your choice of platform.

PS
I haven't commercially written any assembler for decades... C compilers are too good!
 
The following users thanked this post: MK14

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 14842
  • Country: de
Re: ASM programming is FASCINATING!
« Reply #13 on: July 28, 2020, 12:09:40 pm »
ASM programming is still sometimes done with µCs. It does offer the advantage of a well-defined run time, so those old-days-style waiting loops come out accurate.

The AVRs can be a good starting point. The AVR Studio dev environment has a relatively nice simulator, so one can do the first tries even without real hardware. It also helps with debugging.
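
The classic example of that well-defined run time is the busy-wait loop you can cost straight from the data sheet - on AVR, dec takes 1 cycle and brne takes 2 when taken and 1 when it falls through, so (as a sketch):

Code: [Select]
; About 3*N cycles of delay, ignoring call/return overhead:
; each pass except the last costs dec (1) + brne taken (2) = 3 cycles,
; the last pass costs 2, and the initial ldi adds 1 - 3*N in total.
        ldi   r24, 200           ; N = 200  ->  600 CPU cycles
delay_loop:
        dec   r24
        brne  delay_loop

Single-stepping this in the simulator and watching its cycle counter is a nice way to convince yourself the arithmetic is right.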
 

Offline MK14

  • Super Contributor
  • ***
  • Posts: 4952
  • Country: gb
Re: ASM programming is FASCINATING!
« Reply #14 on: July 28, 2020, 12:32:28 pm »
It does offer the advantage of a well-defined run time, so those old-days-style waiting loops come out accurate.

With the rather old processors/architectures, you are right. The odd modern one here and there can be an exception as well (e.g. XCore).

But, unfortunately, with processors beyond a certain level of complexity (superscalar, data/instruction caches, branch-prediction speed-up things, some pipelines (especially long ones),  https://en.wikipedia.org/wiki/Out-of-order_execution  , etc.), they are no longer fully/really predictable as regards 100% precise timing of small program structures, even if the code is written in pure assembly language.
E.g. it *might* be in the cache and/or branch speed-up mechanism, and/or able to issue multiple instructions this machine cycle (depending on various factors, including clashing register usage etc.).
Plus complications because of pipelines, i.e. something is almost instantaneous because the pipeline allows it, or the pipeline has to be flushed, etc. etc.
Disclaimer: The above paragraphs were written quickly and may have small (or bigger) technical inaccuracies. If you are really interested in this, there is plenty of good material (books and websites) out there.
« Last Edit: July 28, 2020, 01:19:55 pm by MK14 »
 
The following users thanked this post: newbrain

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20732
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: ASM programming is FASCINATING!
« Reply #15 on: July 28, 2020, 02:53:52 pm »
ASM programming is still sometimes done with µCs. It does offer the advantage of a well-defined run time, so those old-days-style waiting loops come out accurate.

Only in very simple processors that don't have any cache, don't have interrupts, and are strictly in-order.

Even then it is difficult to guarantee timing in all but the simplest systems or the most lightly loaded systems. If you doubt that, specify which scheduling algorithm you are using, and find a way to convince other people that your implementation meets the requirements. That is surprisingly difficult.

There is, as MK14 notes, one modern alternative that guarantees hard realtime performance by design: the xCORE processors running xC. They use parallelism to avoid the need for interrupts, use hyperthreading techniques to avoid the need for cache, have FPGA-like I/O structures, and the IDE inspects the (optimised) code to determine the exact timing between here and there. No other processor comes close.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: MK14

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15413
  • Country: fr
Re: ASM programming is FASCINATING!
« Reply #16 on: July 28, 2020, 03:07:31 pm »
ASM programming is still sometimes done with µCs. It does offer the advantage of a well-defined run time, so those old-days-style waiting loops come out accurate.

Only in very simple processors that don't have any cache, don't have interrupts, and are strictly in-order.

Even then it is difficult to guarantee timing in all but the simplest systems or the most lightly loaded systems. If you doubt that, specify which scheduling algorithm you are using, and find a way to convince other people that your implementation meets the requirements. That is surprisingly difficult.

Yep.
The "last" processors I did that on - already over 15 years ago now - were PIC16F/18F CPUs. Cycles per instruction were fully documented and fixed for all instructions, and you could indeed predict execution time with 1 cycle accuracy. If you needed *short* and accurate delays, like just a few cycles, that was pretty much the only way.
I did that with even older processors, such as the Z80 - with interrupts disabled.

There is, as MK14 notes, one modern alternative that guarantees hard realtime performance by design: the xCORE processors running xC. They use parallelism to avoid the need for interrupts, use hyperthreading techniques to avoid the need for cache, have FPGA-like I/O structures, and the IDE inspects the (optimised) code to determine the exact timing between here and there. No other processor comes close.

Yeah. Definitely a completely different architecture.

Getting back to the topic: programming in assembly these days on modern processors can certainly be fun and somewhat fascinating, and you can indeed get more performance this way - if you're very good, otherwise most compilers will beat you - but don't expect to be able to accurately predict execution time. Or stick to the very simple processors. As I mentioned, that's something you can still do with PIC16F/18F MCUs for instance, which are still available.

As to getting a "good grasp" on ASM? I would probably start as we used to back in the day - first study the architecture of common (and simple) processors, get to know everything they are able to execute (arithmetic ops, logic ops, branches, memory moves, etc.) and then learn the corresponding assembly. At that point, it'll be easier to understand what every assembly instruction does and WHY it has been implemented. Then of course, look at a lot of code examples, and write your own. If you're interested in the 6502, I think you'll find thousands of resources for that.
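
To put that in concrete terms (a throwaway sketch, in AVR assembly rather than 6502 since that's what the rest of the thread leans on, with a hypothetical some_var variable): once you know the handful of operation categories a CPU offers, each line of assembly reads as exactly one of them.

Code: [Select]
; One line per category - sketch only; some_var is a hypothetical SRAM byte.
        lds   r24, some_var      ; memory move: load from SRAM
        andi  r24, 0x7F          ; logic: mask off the top bit
        subi  r24, 5             ; arithmetic: subtract an immediate
        brpl  store_it           ; branch: skip the fix-up if the result >= 0
        clr   r24                ; clamp anything that went negative to zero
store_it:
        sts   some_var, r24      ; memory move: store the result back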
« Last Edit: July 28, 2020, 03:15:17 pm by SiliconWizard »
 
The following users thanked this post: MK14

Offline NivagSwerdna

  • Super Contributor
  • ***
  • Posts: 2507
  • Country: gb
Re: ASM programming is FASCINATING!
« Reply #17 on: July 28, 2020, 05:33:16 pm »
Of course there is also the joy of reverse engineering using Ida Pro or Ghidra or the like.

If you are interested in computer security etc., then buffer overflow attacks are fun and there are plenty of tutorials out there...

e.g.

If that's your thing then Linux, gcc and gdb are your friends.

Have fun!


 

Offline MK14

  • Super Contributor
  • ***
  • Posts: 4952
  • Country: gb
Re: ASM programming is FASCINATING!
« Reply #18 on: July 28, 2020, 06:57:15 pm »
It does offer the advantage of a well-defined run time, so those old-days-style waiting loops come out accurate.

With the rather old processors/architectures, you are right. The odd modern one here and there can be an exception as well (e.g. XCore).

Disclaimer: The above paragraphs were written quickly and may have small (or bigger) technical inaccuracies. If you are really interested in this, there is plenty of good material (books and websites) out there.

No one else seems to have so far, so I will 'attack' my own post.

Actually, no: even the old processors were not necessarily cycle-accurate.

Examples:
WAIT-state lines (might be called something else depending on the CPU, e.g. READY), which insert wait states to help with slower memory, I/O and sometimes other things. Some schemes are somewhat non-deterministic (e.g. a video card, which may or may not be accessing the video memory at the same instant in time).

If the looping software is waiting for a bit or value in an internal or external peripheral to occur or change, the timing can be asynchronous and/or random, because (with an A-to-D converter, for example) the precise timing may depend on how long the analogue and/or digital circuitry takes (potentially asynchronously to the master CPU clock).

Some instructions have a variable number of cycles depending on the exact values involved, and the documentation doesn't always bother to define what they are. E.g. some integer divide instructions might say "takes 2 to 15 cycles, depending on the values involved". It could be deterministic if you researched how the cycle count varies, but most people probably won't do that. There could be documentation somewhere that defines it (e.g. each 1 bit in the specified register adds a cycle, plus a minimum of 2 cycles, plus an addressing-mode penalty).

Not an exhaustive list, and not the best of examples either, because they are sort of deterministic; it is just that the exact times in practice will vary, depending on somewhat-defined things (such as A-to-D response-time variation), etc.
« Last Edit: July 28, 2020, 07:07:08 pm by MK14 »
 

Online chris_leyson

  • Super Contributor
  • ***
  • Posts: 1549
  • Country: wales
Re: ASM programming is FASCINATING!
« Reply #19 on: July 28, 2020, 07:26:02 pm »
I think the most fun I've ever had writing assembler was for Motorola DSP chips. You always had to keep an eye on how your data was moving and which busses or registers were free to move it, and then sometimes tweak the order of operations so as not to get an arithmetic, transfer or status stall, whereupon the assembler would add a NOP instruction. It was good fun interleaving mathematical operations and data moves to keep the ALU as busy as possible. NXP still do the 56k family.
« Last Edit: July 28, 2020, 07:39:06 pm by chris_leyson »
 
The following users thanked this post: MK14

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20732
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: ASM programming is FASCINATING!
« Reply #20 on: July 28, 2020, 08:22:29 pm »
It does offer the advantage of a well-defined run time, so those old-days-style waiting loops come out accurate.

With the rather old processors/architectures, you are right. The odd modern one here and there can be an exception as well (e.g. XCore).

Disclaimer: The above paragraphs were written quickly and may have small (or bigger) technical inaccuracies. If you are really interested in this, there is plenty of good material (books and websites) out there.

No one else seems to have so far, so I will 'attack' my own post.

Actually, no: even the old processors were not necessarily cycle-accurate.

Examples:
WAIT-state lines (might be called something else depending on the CPU, e.g. READY), which insert wait states to help with slower memory, I/O and sometimes other things. Some schemes are somewhat non-deterministic (e.g. a video card, which may or may not be accessing the video memory at the same instant in time).

If the looping software is waiting for a bit or value in an internal or external peripheral to occur or change, the timing can be asynchronous and/or random, because (with an A-to-D converter, for example) the precise timing may depend on how long the analogue and/or digital circuitry takes (potentially asynchronously to the master CPU clock).

Some instructions have a variable number of cycles depending on the exact values involved, and the documentation doesn't always bother to define what they are. E.g. some integer divide instructions might say "takes 2 to 15 cycles, depending on the values involved". It could be deterministic if you researched how the cycle count varies, but most people probably won't do that. There could be documentation somewhere that defines it (e.g. each 1 bit in the specified register adds a cycle, plus a minimum of 2 cycles, plus an addressing-mode penalty).

Not an exhaustive list, and not the best of examples either, because they are sort of deterministic; it is just that the exact times in practice will vary, depending on somewhat-defined things (such as A-to-D response-time variation), etc.

All true, but most are "outside" the computer, and so "don't count". The "difficult" cases involve instructions like that division instruction, and non-fixed loops in general.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: MK14

Offline MK14

  • Super Contributor
  • ***
  • Posts: 4952
  • Country: gb
Re: ASM programming is FASCINATING!
« Reply #21 on: July 28, 2020, 09:01:19 pm »
All true, but most are "outside" the computer, and so "don't count". The "difficult" cases involve instructions like that division instruction, and non-fixed loops in general.

I agree.
Assuming no wait states (fixed-duration, defined-time ones can be calculated as extra cycles), no interrupts, and nothing else that can affect the timings is active, the cycle times (on certain CPUs only, e.g. old ones) are predictable/deterministic.
My bringing in less deterministic external (sometimes internal) events was wrong - until you start worrying about the overall system, in a complex real-time system, which is not what we are discussing.
« Last Edit: July 28, 2020, 09:03:31 pm by MK14 »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20732
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: ASM programming is FASCINATING!
« Reply #22 on: July 28, 2020, 09:23:10 pm »
All true, but most are "outside" the computer, and so "don't count". The "difficult" cases involve instructions like that division instruction, and non-fixed loops in general.

I agree.
Assuming no wait states (fixed-duration, defined-time ones can be calculated as extra cycles), no interrupts, and nothing else that can affect the timings is active, the cycle times (on certain CPUs only, e.g. old ones) are predictable/deterministic.
My bringing in less deterministic external (sometimes internal) events was wrong - until you start worrying about the overall system, in a complex real-time system, which is not what we are discussing.

It isn't "wrong" per se, and is important, but it is arguably a different discussion.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: MK14

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: ASM programming is FASCINATING!
« Reply #23 on: July 28, 2020, 09:24:18 pm »
The lesson on timing is, of course: if you have enough CPU power to do it, and deterministic timings, then you can hard code it; if not, then buffer it, and make sure you have enough CPU power to get through the worst-case paths to refreshing that buffer in time.  AVRs (most of them) don't have DMA, but most ARMs do (those above the entry-level Cortex-M0 tier, say).  Those ARMs usually also have caches -- though they may be documented cryptically, e.g. as a "Flash memory accelerator"...
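
As a very rough sketch of the "buffer it" half of that (AVR assembly, GNU as syntax; the buffer size, the port, and how the routine gets attached to a timer vector are all hypothetical and device/toolchain specific): a timer ISR pops one byte per tick, so the main loop only has to keep the buffer topped up faster than the worst case drains it - a throughput requirement instead of a latency one.

Code: [Select]
; Timer-tick consumer for a 16-byte ring buffer - sketch only.
        .equ  PORTB_IO, 0x05        ; assumption: classic megaAVR PORTB
        .equ  SREG_IO,  0x3F
        .section .bss
out_buf: .space 16                  ; the buffer the main loop keeps filled
head:    .space 1                   ; producer index (main loop writes)
tail:    .space 1                   ; consumer index (this ISR reads)
        .section .text
        .global timer_tick_isr      ; hooking this to a real vector is toolchain specific
timer_tick_isr:
        push  r24                   ; save everything we touch, plus SREG
        in    r24, SREG_IO
        push  r24
        push  r25
        push  r30
        push  r31
        lds   r24, tail
        lds   r25, head
        cp    r24, r25              ; empty? then we have underrun - nothing to send
        breq  1f
        ldi   r30, lo8(out_buf)     ; Z := &out_buf[tail]
        ldi   r31, hi8(out_buf)
        clr   r25
        add   r30, r24
        adc   r31, r25
        ld    r25, Z
        out   PORTB_IO, r25         ; the time-critical byte goes out on schedule
        subi  r24, -1               ; tail++
        andi  r24, 0x0F             ; wrap at 16
        sts   tail, r24
1:      pop   r31
        pop   r30
        pop   r25
        pop   r24
        out   SREG_IO, r24
        pop   r24
        reti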

This is what PCs do; although the fact that PCs can deliver multimedia at all is still something of a miracle on top of that.  Most OSes don't guarantee program execution within, really, any period of time.  The CPU is one thing, but the OS is a huge pile of APIs, caches and priority queues, and the granularity is not very impressive.  They just happen to work most of the time, say, jumping back into your program every millisecond or so.

Indeed, modern application processors are so fast that you might not care at all about their indeterministic execution time; an AVR or Cortex-M0 can execute one or two instructions in the time it takes the big CPU to execute a hundred -- and those instructions are vastly more powerful, operating on more data (including SIMD extensions) in ever richer ways.  In that fraction of a microsecond the entire computation might be complete, whereas the deterministic CPUs are just sitting down to work.  Not to mention if multiple cores are employed (not that their outputs will be combined until much later, due to inter-CPU communication and cache coherency).  This is partly why a [...], if imperfectly (but you need to use a kernel-mode driver to get around the OS's context switching).

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: MK14

Offline MK14

  • Super Contributor
  • ***
  • Posts: 4952
  • Country: gb
Re: ASM programming is FASCINATING!
« Reply #24 on: July 28, 2020, 11:41:41 pm »
The lesson on timing is, of course: if you have enough CPU power to do it, and deterministic timings, then you can hard code it; if not, then buffer it, and make sure you have enough CPU power to get through the worst-case paths to refreshing that buffer in time. 

In practice, there are various techniques for getting back near(-enough) deterministic capabilities from powerful modern CPUs such as the latest PCs. You described one way.
For example, in game programming it is usual to exploit the fact that you can easily get the actual exact elapsed time, and use that when calculating time-dependent things.
So although the actual timing jitters about like crazy, the calculations move things in proportion to how much time has elapsed since the last event/redraw/movement of the game object that you are currently processing.

I began to be impressed with your video clip, putting Doom onto relatively ancient, very low-capability hardware - until I heard them say they put a Raspberry Pi in it. Not to be confused with totally and utterly cheating, I guess. It is still a significant challenge to interface it into the cartridge and get the low-resolution, limited-colour display to act like a much more modern, fairly high-resolution, many-colour one.
So all things considered, it wasn't too bad!
 

