Use something standard and well known ... Deflate, LZO, LZ4, whatever.
The choice of compressor is less important; it's how you arrange the data that matters.
Just grouping the bytes would help the compressor significantly (e.g. take 16K readings and write 16K bytes of the first byte of every reading, then 16K bytes of the second byte of every reading, and so on).
Just did an experiment with your data: with the first 16K records, the original 49152-byte file compressed with 7z LZMA to 44266 bytes, while the rearranged one shrank to 32260 bytes (roughly 66% of the original size). Deflate gets around 33500 bytes.
You can check for yourself with the attached files containing your data and the PHP script I used to create them.
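The attached script is in PHP, but here's a rough Python sketch of the same byte-grouping idea, just to show the principle (the samples.bin file name and 3-byte sample layout are assumptions, adjust to your format):

```python
import zlib

def group_bytes(raw: bytes, bytes_per_sample: int = 3) -> bytes:
    """Rearrange [s0b0 s0b1 s0b2 s1b0 ...] into
    [all first bytes][all second bytes][all third bytes]."""
    n = len(raw) // bytes_per_sample
    planes = []
    for b in range(bytes_per_sample):
        planes.append(bytes(raw[i * bytes_per_sample + b] for i in range(n)))
    return b"".join(planes)

raw = open("samples.bin", "rb").read()                     # hypothetical input file
print("original  :", len(zlib.compress(raw, 9)))           # deflate, level 9
print("rearranged:", len(zlib.compress(group_bytes(raw), 9)))
```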
You can go further... if you know you have long runs of negative numbers followed by long runs of positive numbers, you could put a header on every chunk of records that says: N total samples, 100 negative, 500 positive, 1 negative, 1000 positive, and so on. The header may take 100 bytes or as many as needed, but you save 16K records x 1 sign bit, or 2 KB.
In the attached example there are 128 changes of sign in the first 16K samples (if my code is correct), and most runs are up to 129 samples of the same sign, so if you use 2 bytes for each group that's a 256-byte header, but you're saving 16K sign bits, or 2048 bytes... an overall reduction of 1792 bytes.
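A minimal sketch of how that sign-run header could be built, assuming the samples are already decoded to signed Python ints (2-byte run lengths as estimated above; runs longer than 65535 samples would need splitting):

```python
import struct

def sign_run_header(samples):
    """Build a header of same-sign run lengths:
    1 byte first-sign flag, 2-byte run count, then 2 bytes per run."""
    runs = []
    first_negative = samples[0] < 0
    current = first_negative
    count = 0
    for s in samples:
        neg = s < 0
        if neg == current:
            count += 1
        else:
            runs.append(count)
            current = neg
            count = 1
    runs.append(count)

    header = bytearray()
    header += struct.pack("<B", 1 if first_negative else 0)
    header += struct.pack("<H", len(runs))
    for r in runs:
        header += struct.pack("<H", r)      # 2 bytes per run, as estimated
    return bytes(header)
```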
You could use other tricks, like storing only the difference between samples: keep a "keyframe" sample stored in full 3 bytes, then have 16 (or as many as you want) samples that store only the difference from the previous value, using 2 or 3 bytes each (first bit 0 = 2 bytes, first bit 1 = 3 bytes). You could even use a dynamic size for each value: for example, make each value a multiple of 4 bits (minimum 12 bits, maximum 28 bits), and use 2 bits to signify the length, one bit for the sign, and the rest of the bits for the difference.
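Here's a sketch of the keyframe + delta variant with the 2-or-3-byte flag (24-bit signed samples assumed; a matching decoder would read the flag bit and sign-extend, and deltas bigger than 23 bits would need an escape or a fresh keyframe):

```python
def encode_deltas(samples, block_size=16):
    """Each block: a 3-byte keyframe, then deltas from the previous sample.
    Flag bit 0 -> 15-bit delta in 2 bytes, flag bit 1 -> 23-bit delta in 3 bytes."""
    out = bytearray()
    for start in range(0, len(samples), block_size):
        block = samples[start:start + block_size]
        prev = block[0]
        out += (prev & 0xFFFFFF).to_bytes(3, "big")   # keyframe, full 24-bit sample
        for s in block[1:]:
            delta = s - prev
            prev = s
            if -(1 << 14) <= delta < (1 << 14):
                # 2-byte form: top bit 0, 15-bit two's-complement delta
                out += (delta & 0x7FFF).to_bytes(2, "big")
            else:
                # 3-byte form: top bit 1, 23-bit two's-complement delta
                out += ((1 << 23) | (delta & 0x7FFFFF)).to_bytes(3, "big")
    return bytes(out)
```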