Some brilliant ideas here, though I must confess understanding some of them is above my mental pay grade.
I thank those brilliant people for putting in such effort to help this really good community.
I think Nominal Animal's method is similar to storing a marker at the start of each 512-byte block, which is a byte count within that block. Sadly, 2 bytes would be needed for the byte count, but one could use special values above the 0-510 range, e.g. (sketched roughly in C after the list below):
- bit 15 = 1: this block is empty
- bit 14 = 1: first data block
- any nonzero value in bits 0-8, below 510: the data ends within this block
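For illustration, a rough C sketch of how that 16-bit per-block header word could be packed; the names are mine, and treating the erased value 0xFFFF as "empty" is my assumption, not necessarily what Nominal Animal had in mind:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical 16-bit header stored at the start of each 512-byte block.
 * Assumption: erased flash reads 0xFF, so an erased header (0xFFFF) has
 * bit 15 set and therefore also counts as "empty".                      */
#define BLK_HDR_EMPTY      0x8000u   /* bit 15: this block is empty      */
#define BLK_HDR_FIRST      0x4000u   /* bit 14: first data block         */
#define BLK_HDR_COUNT_MSK  0x01FFu   /* bits 0-8: byte count             */
#define BLK_DATA_BYTES     510u      /* 512 minus the 2-byte header      */

static inline bool blk_is_empty(uint16_t hdr)
{
    return (hdr & BLK_HDR_EMPTY) != 0;
}

static inline bool blk_is_first(uint16_t hdr)
{
    return (hdr & BLK_HDR_FIRST) != 0;
}

/* A count below 510 means the data ends somewhere inside this block;
 * a count of exactly 510 means the block is full and the data continues. */
static inline bool blk_is_last(uint16_t hdr)
{
    uint16_t count = hdr & BLK_HDR_COUNT_MSK;
    return (count > 0) && (count < BLK_DATA_BYTES);
}
```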
My task is not just to code this, but also to implement an API which any pleb can use. I work on the principle that if I can understand something then everybody else will have no problem at all, and if I can't, it will be a support nightmare
(other people will be writing code for this product later on).
And, thinking about the API "for a data logger", it comes back to implementing an auto-wear-levelling file system.
Which I don't have, and I have no interest in ripping out FatFS, because:
- it works perfectly
- don't have the time
- the file system is also accessible via USB, as a removable logical block device with 512-byte sectors (hence the FLASH device choice: 512 and not 528), so the same files are visible to Windows etc. (with some deliberate limitations in the embedded API, e.g. only 8.3 filenames, no subdirectory support...)
Any wear-levelling FS would not be visible via USB, unless somehow emulated (complicated, because Windows dives straight in at sector level).
So, back to my 512kbyte area dedicated to data logging. What sort of API should be implemented? (Rough C prototypes for these follow the list.)
- fast format
- slow format (for secure erase)
- append a data block, size x
- read a data block from offset x, max y bytes (obviously x needs to be < current end of data)
- truncate the data at x (poke the "end of data" marker in, before the previous end of data)
- return total data size
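Something like this, purely as a sketch of the shape of it; every name here is invented just to pin the semantics down, it is not an existing library:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical status code for all the logging-area calls. */
typedef enum {
    LOG_OK = 0,
    LOG_ERR_RANGE,   /* offset/size beyond the current end of data */
    LOG_ERR_FULL,    /* the 512k area is exhausted                 */
    LOG_ERR_IO       /* FLASH read/program/erase failure           */
} log_status_t;

log_status_t log_format_fast(void);                    /* mark the area empty             */
log_status_t log_format_secure(void);                  /* erase every byte                */
log_status_t log_append(const void *data, size_t x);   /* append a data block, size x     */
log_status_t log_read(uint32_t x, void *buf, size_t y,
                      size_t *n_read);                 /* read from offset x, max y bytes */
log_status_t log_truncate(uint32_t x);                 /* move "end of data" back to x    */
uint32_t     log_size(void);                           /* total data size currently held  */
```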
I think supporting the above with the more cunning schemes would be quite complicated. It really lends itself to an "end of data" marker scheme. OTOH it would encourage a usage mode where people write, say, 100k of data, then read it and do something with it (send it somewhere, write it out to a file in the FatFS filesystem, etc.), then format. So the early part of the 512k would get more wear. But vastly less wear than opening a file in FatFS and appending to it, which thrashes the FAT area (like an SSD with WinXP, which IME always works for exactly 1 year, on a 24/7 machine, before you get "NTLDR corrupted").
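In terms of the hypothetical prototypes sketched above, that usage mode would be roughly:

```c
/* Typical cycle using the hypothetical API sketched above: log a pile
 * of samples, read it back out in chunks, ship it somewhere, then
 * fast-format and start again.                                        */
static uint8_t chunk[512];

void flush_log(void)
{
    uint32_t total = log_size();

    for (uint32_t off = 0; off < total; off += sizeof(chunk)) {
        size_t got = 0;
        if (log_read(off, chunk, sizeof(chunk), &got) != LOG_OK)
            break;
        /* ...send 'got' bytes somewhere, or write them to a FatFS file... */
    }
    log_format_fast();   /* ready for the next run */
}
```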
If one wants to keep the wear totally even, one needs to do it differently, with no fixed "start of data" point. One needs both "start of data" and "end of data" points to be continually moving around the 512kbyte block. I need to decide whether this is worth the extra effort. One simple way would be for the "fast format" command to replace the "end of data" marker with a "start of data" marker, and then the next block to write would be written after that.
The job becomes trivially easy if the start of data marker is 0x00, the end of data marker is 0xFF, and the data cannot contain either.
Then you don't have to do string matching across block boundaries. I am sure there is a slick way to do that, but I don't know of one.
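A minimal sketch of that, assuming exactly one 0x00 start marker exists, all free/erased space reads as 0xFF, and the data itself never contains either value, so finding the data extent is just a byte scan with wraparound:

```c
#include <stdint.h>

#define LOG_AREA_SIZE  (512uL * 1024uL)
#define MARK_START     0x00u    /* single start-of-data marker byte  */
#define MARK_END       0xFFu    /* erased flash / end-of-data byte   */

/* Hypothetical byte read from the 512kbyte logging area. */
extern uint8_t log_flash_read_byte(uint32_t offset);

/* Find where the data starts and how long it is, wrapping around the
 * area.  Assumes exactly one 0x00 marker exists and the data never
 * contains 0x00 or 0xFF.  (A freshly erased area with no marker at
 * all would need a special case, omitted here.)                      */
void log_find_extent(uint32_t *start, uint32_t *len)
{
    uint32_t s = 0;
    for (uint32_t i = 0; i < LOG_AREA_SIZE; i++) {
        if (log_flash_read_byte(i) == MARK_START) { s = i; break; }
    }

    /* data runs from the byte after the marker up to the first 0xFF */
    uint32_t n = 0;
    uint32_t i = (s + 1u) % LOG_AREA_SIZE;
    while (n < LOG_AREA_SIZE - 1u && log_flash_read_byte(i) != MARK_END) {
        n++;
        i = (i + 1u) % LOG_AREA_SIZE;
    }

    *start = (s + 1u) % LOG_AREA_SIZE;
    *len   = n;
}
```

You'd presumably do that scan once at power-up and keep the offsets in RAM afterwards.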
Yes indeed I could (not on the current board, though) put in an SD card socket and have "unlimited" storage. All SD cards should support the basic SPI mode, if not the others (hence my other thread about whether licensing is needed for the faster modes; the actual algorithms seem to have escaped into the wild, but I can't run my SPI faster than 21 Mbps anyway, so the 5-10 Mbps possible with license-free SPI would be more than fine).