Author Topic: SLIC - Simple Lossless Imaging Codec  (Read 996 times)


Offline DiTBhoTopic starter

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
SLIC - Simple Lossless Imaging Codec
« on: February 24, 2023, 01:35:48 pm »
Code on GitHub.
Impressions?  :o :o :o
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15439
  • Country: fr
Re: SLIC - Simple Lossless Imaging Codec
« Reply #1 on: February 25, 2023, 09:44:00 pm »
Not much so far. Just had a quick look. It says it's inspired by QOI, which OTOH I have tested and even re-implemented.

I can't tell what the key differences from QOI are. But I can tell it takes much more code to implement than QOI, so much for being suited to embedded.
I haven't seen what would make it better, so if you have (or someone else has), just chime in. I don't have time to figure it out.

(I got confused at first about where the actual code was, until I found out it was in some .inl file. And it's just pure C. Nice.)

 
The following users thanked this post: DiTBho

Offline DiTBhoTopic starter

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: SLIC - Simple Lossless Imaging Codec
« Reply #2 on: February 26, 2023, 12:07:09 am »
I need something light for a custom VNC-like engine to be implemented on a PPC40x @ 133 MHz embedded node with only 6 Mbyte/s over TCP/IP.

It needs to move 1024x768x24 @ 30 fps  :-//

(possibly mission impossible)
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15439
  • Country: fr
Re: SLIC - Simple Lossless Imaging Codec
« Reply #3 on: February 26, 2023, 02:59:17 am »
QOI might be fine for what you want to do.
You can add some form of fast compression on top of it. I added LZ4 after QOI and got something pretty nice and fast. LZ4 might even be a bit too much for your needs, but QOI, either on its own or followed by some faster general-purpose compression, should do the trick.
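A minimal sketch of that QOI-then-LZ4 chain, assuming the reference single-header qoi.h and liblz4 (the function below and its error handling are only illustrative, not anyone's actual pipeline):

Code:
#define QOI_IMPLEMENTATION
#include "qoi.h"
#include <lz4.h>
#include <stdlib.h>

/* Compress one RGB frame; returns a malloc'd buffer, size in *out_len. */
static void *compress_frame(const void *rgb, int w, int h, int *out_len)
{
    qoi_desc desc = { .width = (unsigned)w, .height = (unsigned)h,
                      .channels = 3, .colorspace = QOI_SRGB };
    int qoi_len = 0;
    void *qoi_buf = qoi_encode(rgb, &desc, &qoi_len);      /* lossless pass 1 */
    if (!qoi_buf)
        return NULL;

    int bound = LZ4_compressBound(qoi_len);
    char *lz4_buf = malloc((size_t)bound);
    if (!lz4_buf) { free(qoi_buf); return NULL; }

    *out_len = LZ4_compress_default(qoi_buf, lz4_buf, qoi_len, bound);  /* pass 2 */
    free(qoi_buf);
    if (*out_len <= 0) { free(lz4_buf); return NULL; }
    return lz4_buf;
}

The receiving end does the reverse: LZ4_decompress_safe() into a scratch buffer, then qoi_decode().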
 
The following users thanked this post: DiTBho

Offline mariush

  • Super Contributor
  • ***
  • Posts: 5143
  • Country: ro
Re: SLIC - Simple Lossless Imaging Codec
« Reply #4 on: February 26, 2023, 04:01:15 am »
Well, the first thing you should consider is whether you truly need 24 bits. Converting from RGB24 to YV12 (4:2:0) gives you 6 bytes for every 4 pixels, so you go down from 2.25 MB per frame to half of that: a 2x2 block of 4 pixels goes from 4 x 3 = 12 bytes to 4 bytes of individual luma + 2 bytes of shared chroma.
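A minimal sketch of that 2x2-block math (the BT.601-style integer coefficients are an assumption here, not something from this thread; use whatever matrix the rest of the pipeline expects):

Code:
#include <stdint.h>

/* rgb[] holds a 2x2 block as 4 pixels of R,G,B: 12 bytes in,
 * 4 luma bytes + 1 shared U + 1 shared V (6 bytes) out. */
static void block_rgb24_to_yv12(const uint8_t rgb[12],
                                uint8_t y_out[4], uint8_t *u, uint8_t *v)
{
    int r_sum = 0, g_sum = 0, b_sum = 0;

    for (int i = 0; i < 4; i++) {
        int r = rgb[3 * i], g = rgb[3 * i + 1], b = rgb[3 * i + 2];
        y_out[i] = (uint8_t)((77 * r + 150 * g + 29 * b) >> 8);  /* per-pixel luma */
        r_sum += r; g_sum += g; b_sum += b;
    }

    /* Average the block, then emit one shared chroma pair (4:2:0). */
    int r = r_sum / 4, g = g_sum / 4, b = b_sum / 4;
    *u = (uint8_t)((128 * b - 43 * r - 85 * g) / 256 + 128);
    *v = (uint8_t)((128 * r - 107 * g - 21 * b) / 256 + 128);
}

In a real YV12 layout the Y, V and U samples go into three separate planes; the function above just shows where the 12-to-6-byte saving comes from.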

Then you should probably consider tiling the frame into 16x16-pixel or 64x1/128x1/256x1 regions (whatever is faster) and comparing each tile against the previous frame. With 128x1 "tiles" you'd only need 8 bits (1 byte) per 1024-pixel horizontal line to tell the decoder which tiles match the previous frame and which don't (see the sketch below).
With small pieces like 64-128 pixels you can also easily count the number of colors and, if it's fewer than, let's say, 16-32, switch that segment to a palette (and maybe cache a bunch of palettes to be reused until the next keyframe) instead of 24-bit/YV12, and save more bytes. If it's only one or two colors, that's a hint the segment can be RLE-compressed.
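A minimal sketch of the per-row change mask for 128x1 tiles (the helper name and the bytes-per-pixel parameter are just illustrative):

Code:
#include <stdint.h>
#include <string.h>

/* One byte per 1024-pixel row: bit t set => tile t differs from the
 * previous frame and must be (re)sent; 0x00 means the row is unchanged. */
static uint8_t row_change_mask(const uint8_t *cur_row, const uint8_t *prev_row,
                               int width, int bytes_per_pixel)
{
    const int tile_px = 128;                     /* 1024 / 128 = 8 tiles */
    uint8_t mask = 0;

    for (int t = 0; t < width / tile_px; t++) {
        size_t off = (size_t)t * tile_px * bytes_per_pixel;
        size_t len = (size_t)tile_px * bytes_per_pixel;
        if (memcmp(cur_row + off, prev_row + off, len) != 0)
            mask |= (uint8_t)(1u << t);          /* tile t changed */
    }
    return mask;
}

The per-tile palette/color-count decision can then be made only for the changed tiles, in the same pass.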

Of course, every 30-60-90 frames you'd need a keyframe, so that someone connecting to the system waits at most a few seconds for the next one.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: SLIC - Simple Lossless Imaging Codec
« Reply #5 on: February 26, 2023, 08:13:38 am »
I don't have any references or links, unfortunately, but I have read a few articles discussing old video codecs; maybe you can find them?  Contemporary work is relevant, for obvious reasons.

Jon Burton's video comes to mind (embedded video),

but that's a far more limited system, and obviously even doing full-screen redraws on such a system is challenging enough, let alone video: significant compromises to quality were required.  A few PC games (in the 90s, maybe early 2000s just before MPEG acceleration became widespread) attempted video, some using the popular QuickTime and relatives for example (often limited by CD bandwidth), others using more obscure/bespoke codecs.  That seems relevant, at least; I don't know your exact specs but I'm guessing it's comparable to a Pentium of that era.  Maybe a lower end one since POWER is RISCy (fast but weak instructions vs. slow but powerful).

Without a doubt you MUST compromise quality.  Raw video here is about 70.8 MB/s (1024 x 768 x 3 bytes x 30 fps).  Matter of fact, do you even have a fat enough pipe to the graphics adapter?  At 32 bits x 133 MHz (unless that has a wider FSB?) it's plausible, but it very much depends on what bus, graphics, and RAM you have (like, is it dual-ported?), and whatever else you have to do in the background...

Like, a contemporary PC (2MB SVGA, Pentium 66 or thereabouts) sure as hell ain't gonna do full resolution redraws at anywhere near real time frame rates.  Maybe your system is more tightly integrated (more graphics bandwidth).

Anyways, lossless will only ever get you 2-4x compression on media data, or somewhere around there.  You're asking for about 11.8x (70.8 / 6); there's simply no bridge between those points.  But a modest degradation of quality will do.  HD video would do fine... but MPEG on the CPU is piss.

Some basics ought to help.  Toss away bits that aren't visually used: consider mu-law coding; a YUV transform and chroma downsampling; difference frames if you can (requires a back buffer!), though in the worst case you still need some way to update the whole screen; and use spatial awareness to your advantage, but be careful about the higher complexity of 2D data (IIRC the discussion of QOI included a critique of PNG's row-aware algorithm?).

Blocking probably isn't going to gain you very much, except on very flat patches, which won't show up very often (but the encoder could set a variable threshold for what counts as a "flat block", maintaining a fixed bitrate while prioritizing the more significant changes).  Well, maybe; blocking certainly helps deal with priority by area.  Maybe it can be applied fractally, which would in turn depend on the expected frequency content of the image (you wouldn't make a fully general binary space partition, but merging together a few orders of magnitude could still afford higher resolution in priority areas, i.e. variable block size on a powers-of-2 grid).

None of these methods should impact bulk CPU performance too much (i.e. avoid high complexity per pixel and cache misses).  And perhaps block size is limited by cache size as well, give or take how much has to be reserved for instructions and for the source and destination buffers.
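A minimal sketch of the difference-frame part (it needs a back buffer on both ends, as noted above); the names are just illustrative:

Code:
#include <stddef.h>
#include <stdint.h>

/* Encoder: subtract the previous frame so static areas become long runs
 * of zeros, which any RLE/LZ-style stage downstream handles cheaply. */
static void diff_frame(const uint8_t *cur, const uint8_t *prev,
                       uint8_t *delta, size_t nbytes)
{
    for (size_t i = 0; i < nbytes; i++)
        delta[i] = (uint8_t)(cur[i] - prev[i]);     /* wraps mod 256 */
}

/* Decoder: add the delta back into its own back buffer. */
static void undiff_frame(const uint8_t *delta, uint8_t *back, size_t nbytes)
{
    for (size_t i = 0; i < nbytes; i++)
        back[i] = (uint8_t)(back[i] + delta[i]);
}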

There may also be some transforms you can perform to precondition the image for better compression.  PNG does forward differencing (among other things).  Maybe some periodicity tuning to improve LZ-esque methods?
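A minimal sketch of that kind of preconditioning, here PNG's "Sub" filter (predict each byte from the same channel one pixel to the left); this is the generic PNG idea, not SLIC's own transform:

Code:
#include <stddef.h>
#include <stdint.h>

/* In-place forward difference on one row; bpp = bytes per pixel. */
static void filter_sub(uint8_t *row, size_t row_bytes, size_t bpp)
{
    for (size_t i = row_bytes; i-- > bpp; )         /* right to left */
        row[i] = (uint8_t)(row[i] - row[i - bpp]);
}

/* Inverse, run by the decoder after decompression. */
static void unfilter_sub(uint8_t *row, size_t row_bytes, size_t bpp)
{
    for (size_t i = bpp; i < row_bytes; i++)        /* left to right */
        row[i] = (uint8_t)(row[i] + row[i - bpp]);
}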

Also, assuming you have ~unlimited time to encode (or at least a much more powerful system e.g. modern PC or FPGA, and the dev time to implement it), you could study the decoder in excruciating detail to figure out subtle optimizations; and given even more time, refine the decoder to remove less frequently used codings and tweak things back and forth further.  I mean, you'll need to do that initially anyway, but, just to say, the "long tail" of deep optimization...

Tim
« Last Edit: February 26, 2023, 08:21:51 am by T3sl4co1l »
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: DiTBho

