(I don't care about convenience, really; I only care whether and how it affects the efficiency [resource use] of implementations, and whether one choice allows more alternative implementations than the other.)
The only case I can think of where the byte order actually matters (as in, makes a difference in the complexity of the implementation) is when accessing arbitrary-precision number limbs or bit strings.
In little-endian byte order, you can use any unsigned integer type to access the ith bit in the string. That is, if map is sufficiently aligned and large enough,
#include <stdint.h>
#include <stddef.h>

/* Bit i of the map, reading the map in 64-bit chunks. */
unsigned int get_bit64(const uint64_t *map, const size_t bit)
{
    return !!(map[bit / 64] & ((uint64_t)1 << (bit & 63)));
}

/* Bit i of the same map, reading it in 8-bit chunks. */
unsigned int get_bit8(const uint8_t *map, const size_t bit)
{
    return !!(map[bit / 8] & (1 << (bit & 7)));
}
then you always have get_bit64(map, i) == get_bit8(map, i). (Ignore any typos in the above code, if you find any.) Not so with big-endian byte order, where you must know the specific word size used to store the bit map in order to access it; a sketch of what that looks like follows below. Granted, this only matters in some rather odd cases, such as when different operations wish to access the same binary data in different-sized chunks.
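For contrast, here is a minimal sketch (my own illustration; the name get_bit8_be64 is made up) of what the byte-wise accessor would have to look like on a big-endian machine, assuming the map was written to memory as an array of uint64_t: the index arithmetic has to bake in the word size, so an accessor written for a 64-bit map would be wrong for a 32-bit one.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch: byte-wise access to a bit map that was stored
   as uint64_t words on a big-endian machine. Within each 8-byte word,
   bit 0 lives in the *last* byte, so the byte index must undo the
   word's byte order; the 64, 8 and 7 below would all change if the
   map had been stored in 32-bit or 16-bit words instead. */
unsigned int get_bit8_be64(const uint8_t *map, const size_t bit)
{
    const size_t byte = (bit / 64) * 8 + 7 - ((bit / 8) & 7);
    return !!(map[byte] & (1 << (bit & 7)));
}

The little-endian versions above, by contrast, contain no trace of the word size in their index arithmetic, which is the whole point.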
Other than that, the byte order really does not seem to affect me as a programmer much. The fact that there is more than one byte order in use does, but I guess I'm used to that, having dealt with so many binary data formats with differing byte orders.