My solution involves two floating-point operations per digit read, which may be costly on an 8-bit MCU. But at least there are fewer iteration, index, and temp variables to manage (and debug).
sizeof returns the number of bytes allocated to the whole type/variable x, so dividing by the bytes used by the first element, x[0], yields the number of elements. No need to create a #define for every array; you write the count once at the array definition and compute it everywhere else. It's portable, type-independent, and has no runtime overhead, since the value is resolved at compile time.