OK, completely blue-sky thinking here.
Data bits are captured on the falling edge of the bottom signal (clock).
First image gives 00000001 10010011 10000011 1, last image gives 00000001 11000001 11010101 1
The last bit seems to be a stop bit or something.
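If it helps anyone sanity-check the sampling rule, here's a rough Python sketch of what I mean by "captured on the falling edge". read_clock() and read_data() are purely hypothetical placeholders for however you're probing the two lines, not a real driver:

```python
# Rough sketch of the sampling rule: shift a data bit in on every falling
# edge of the clock, MSB first. read_clock() and read_data() are placeholders
# for whatever access you have to the two lines; the point is just the
# falling-edge rule.

def capture_word(read_clock, read_data, n_bits=25):
    word = 0
    count = 0
    prev = read_clock()
    while count < n_bits:
        cur = read_clock()
        if prev == 1 and cur == 0:            # clock falling edge
            word = (word << 1) | read_data()  # data bit is valid here
            count += 1
        prev = cur
    return word                               # 24 data bits + trailing bit
```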
Convert the first 24 bits into a 24-bit number, most significant bit first.
First image 0x01 9383
Last image 0x01 C1D5
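Quick Python sketch of that conversion, just to show my working (the bit strings are the ones above; the function name is mine):

```python
# Turn the captured bits (MSB first) into a 24-bit value.
# Bit strings are the ones read off the two captures, trailing bit dropped.

def bits_to_word(bits: str) -> int:
    """Convert a string of '0'/'1' characters, MSB first, to an integer."""
    return int(bits.replace(" ", ""), 2)

first = bits_to_word("00000001 10010011 10000011")   # first image
last  = bits_to_word("00000001 11000001 11010101")   # last image

print(f"{first:#08x}")  # 0x019383
print(f"{last:#08x}")   # 0x01c1d5
```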
Assuming the scale reads from 0 to 5,000 g in 0.1 g steps, it needs 16 bits of precision. The first byte doesn't seem to change, so how about assuming it's an address byte? That would leave the other two bytes as the reading, which makes sense as most scales I've seen give both a raw and a net reading.
First image 0x9383 = 37763 decimal
Last image 0xC1D5 = 49621 decimal
Were the readings 3,776.3g and 4,962.1g?
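The guess in full, as a quick Python sketch (strip the address byte, divide the low 16 bits by ten; names are just illustrative):

```python
# Decode hypothesis: top byte is an address, lower 16 bits are the weight
# in 0.1 g steps. Purely an assumption to test against the display readings.

def decode(word: int):
    address = (word >> 16) & 0xFF   # top byte, 0x01 in both captures
    raw = word & 0xFFFF             # lower 16 bits
    grams = raw / 10.0              # 0.1 g per count, if the guess is right
    return address, grams

print(decode(0x019383))  # (1, 3776.3)
print(decode(0x01C1D5))  # (1, 4962.1)
```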
If so, success!
If not, back to the drawing board for me...