Lossless or lossy?
There are the obvious choices like H.264 and AV1, but there are also lossless encoders like FFV1 which can be quite efficient. Frustratingly, efficient implementations rarely exist at the logic-gate level - you either have to get your frames into userspace on an OS to use those libraries, write your own FPGA or DSP implementation of the encoder, or rely on some built-in encoder peripheral to do it.
At the ultra-low level there are obviously simple algorithms like run-length encoding of pixels from a stream, but I'm not sure they'd help much with thermal images, where adjacent pixels are often just slightly different rather than identical.
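For reference, RLE on a raw pixel stream really is only a few lines; a minimal sketch, assuming frames arrive as flat uint16_t arrays and runs are capped at 255 (the names here are mine, not any established API):

```c
#include <stdint.h>
#include <stddef.h>

/* Run-length encode n 16-bit pixels into (count, value) triples:
 * one byte of run length followed by the value, little-endian.
 * dst must hold the worst case of 3 bytes per pixel.
 * Returns the number of bytes written. */
size_t rle_encode16(const uint16_t *src, size_t n, uint8_t *dst)
{
    size_t out = 0;
    size_t i = 0;
    while (i < n) {
        uint16_t v = src[i];
        size_t run = 1;
        while (i + run < n && src[i + run] == v && run < 255)
            run++;
        dst[out++] = (uint8_t)run;        /* run length, 1..255 */
        dst[out++] = (uint8_t)(v & 0xFF); /* pixel value, low byte */
        dst[out++] = (uint8_t)(v >> 8);   /* pixel value, high byte */
        i += run;
    }
    return out;
}
```

As the concern above suggests, raw thermal values rarely repeat exactly, so something like this would probably only pay off after a delta/predictive step that turns slowly-varying pixels into mostly-small residuals.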
Has anyone found benefit in squeezing unused bits out of an image? ex: OR every 16-bit pixel in a frame together; if the result is 0b0001111111111000, then bits 15-13 and 2-0 are never set in any pixel, so drop those bits from each pixel, concatenate each frame into an HxWx(16-6)-bit binary array, and put a header before each frame recording which bits were kept.
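The per-frame bookkeeping for that idea is cheap. A minimal sketch, assuming flat uint16_t frames, a contiguous span of used bits, and MSB-first packing (the struct and function names are made up for illustration):

```c
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint8_t lo;   /* lowest used bit index  (per-frame header field) */
    uint8_t hi;   /* highest used bit index (per-frame header field) */
} bit_span;

/* OR all pixels together to find which bits are ever set in the frame. */
bit_span find_used_bits(const uint16_t *px, size_t n)
{
    uint16_t any = 0;
    for (size_t i = 0; i < n; i++)
        any |= px[i];
    bit_span s = {0, 15};
    if (any == 0) { s.hi = 0; return s; }      /* degenerate all-zero frame */
    while (!(any & (1u << s.lo))) s.lo++;
    while (!(any & (1u << s.hi))) s.hi--;
    return s;
}

/* Pack each pixel down to (hi - lo + 1) bits, MSB-first.
 * Returns the number of bytes written to dst. */
size_t pack_frame(const uint16_t *px, size_t n, bit_span s, uint8_t *dst)
{
    unsigned width = s.hi - s.lo + 1;
    size_t out = 0;
    uint32_t acc = 0;      /* bit accumulator */
    unsigned nbits = 0;    /* valid bits currently in acc */
    for (size_t i = 0; i < n; i++) {
        uint16_t v = (px[i] >> s.lo) & ((1u << width) - 1);
        acc = (acc << width) | v;
        nbits += width;
        while (nbits >= 8) {               /* emit full bytes */
            nbits -= 8;
            dst[out++] = (uint8_t)(acc >> nbits);
        }
        acc &= (1u << nbits) - 1u;         /* discard already-emitted bits */
    }
    if (nbits)                             /* flush trailing partial byte */
        dst[out++] = (uint8_t)(acc << (8 - nbits));
    return out;
}
```

Since the span can differ from frame to frame, keeping lo/hi in the small per-frame header keeps the scheme lossless while letting the packed width adapt; whether it saves much depends entirely on how little of the 16-bit range a typical frame actually spans.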