Most I2C sensors decouple the actual sampling from reading it out anyway - either by implementing a FIFO which can hold many samples, or at the very least by buffering one sample (effectively a FIFO of length 1). This means that exact timing is non-critical as long as you read out the sample before the next one arrives*, which is pretty easy if the sample rate is low.
*) if that next sample is relevant. If you just want data at, say, 10 Hz and the sensor samples internally at 100 Hz, it's usually not a problem to let the data register overrun and lose 90% of the samples, although aliasing might be an issue if the input contains frequencies above half your final sample rate.
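As a concrete sketch of that pattern, here is what polling such a sensor might look like - the address, register map, data-ready bit and the i2c_read_regs() helper are hypothetical placeholders for whatever your particular part and I2C driver actually provide:

```c
/* Minimal polling sketch of "read before the next sample arrives".
 * The address, register map and i2c_read_regs() are hypothetical
 * placeholders for your actual sensor and I2C driver. */
#include <stdint.h>
#include <stdbool.h>

#define SENSOR_ADDR      0x48u  /* hypothetical 7-bit I2C address    */
#define REG_STATUS       0x00u  /* hypothetical status register      */
#define REG_DATA         0x01u  /* hypothetical 16-bit data register */
#define STATUS_DATA_RDY  0x01u  /* hypothetical "new sample" bit     */

/* Provided by your I2C driver; returns true on success. */
bool i2c_read_regs(uint8_t addr, uint8_t reg, uint8_t *buf, uint8_t len);

/* Returns true and fills *sample if a fresh sample was available. */
bool sensor_poll(int16_t *sample)
{
    uint8_t status;
    if (!i2c_read_regs(SENSOR_ADDR, REG_STATUS, &status, 1))
        return false;
    if (!(status & STATUS_DATA_RDY))
        return false;           /* no new data yet - try again later */

    uint8_t raw[2];
    if (!i2c_read_regs(SENSOR_ADDR, REG_DATA, raw, 2))
        return false;
    *sample = (int16_t)((raw[0] << 8) | raw[1]);
    return true;
}
```

The only timing requirement is that sensor_poll() runs at least once per sample period - or, per the footnote, once per sample you actually care about.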
The same is true for sampling analog signals - triggering the ADC conversion can be done accurately and jitter-free, e.g. by a hardware timer, so that readout timing is irrelevant (as long as it happens before the next sample).
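A rough sketch of that arrangement, with the hardware-specific calls (adc_enable_timer_trigger() and friends) as hypothetical stand-ins for your MCU's registers or HAL:

```c
/* Sketch of timer-triggered ADC sampling: the hardware timer gives
 * jitter-free sample timing; the interrupt only has to collect the
 * result before the next conversion completes. All adc_... and
 * timer_... calls are hypothetical stand-ins for your MCU's HAL. */
#include <stdint.h>

volatile uint16_t latest_sample;

void timer_start_periodic_us(uint32_t period_us);  /* hypothetical */
void adc_enable_timer_trigger(void);               /* hypothetical */
void adc_enable_conversion_irq(void);              /* hypothetical */
uint16_t adc_read_result(void);                    /* hypothetical */

void sampling_init(void)
{
    adc_enable_timer_trigger();     /* hardware starts each conversion */
    adc_enable_conversion_irq();
    timer_start_periodic_us(1000);  /* 1 kHz sample rate, fixed in hardware */
}

/* Conversion-complete interrupt: readout timing is uncritical as long
 * as this runs before the next conversion finishes. */
void adc_irq_handler(void)
{
    latest_sample = adc_read_result();
}
```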
DMA, if available, extends this even further, so that you don't need to process between every two samples; instead you can collect many samples in memory automagically for later batch processing. Although, even if you could, it's another question whether you should: on microcontrollers, timing is usually easy anyway, so if you just guarantee doing all the work between two samples, the job is done, and there's no need to think about how to process them in batch.

Again, like in real life: if you work at a box factory assembling cardboard boxes that come in on a conveyor, it's so much easier if you are quick and dedicated enough to finish each one before the next arrives. Failing that, you can collect them, but that brings new management tasks and requires space to store half-finished boxes; and eventually the conveyor needs to stop so that you can finish your work. And microcontrollers do not need coffee breaks, so a simple "do it fully once it arrives" works very well.
But a pure datalogger is actually one of the cases where DMA and batch processing probably do make sense and make life easier - they significantly decouple sample generation from storage. This works because on a datalogger, latency is usually irrelevant - the data is looked at hours, not nanoseconds, later. DMA into memory gives leeway for things like SD card / filesystem latency and timing uncertainties.
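A sketch of what that can look like with a circular ("ping-pong") DMA buffer and half-complete/complete interrupts - a common arrangement, though the dma_... and sd_... functions here are hypothetical placeholders:

```c
/* DMA-based batching for a datalogger: while DMA fills one half of a
 * circular buffer, the main loop can take its time writing the other
 * half to the card. dma_start_circular() and sd_write_block() are
 * hypothetical placeholders for your DMA and storage drivers. */
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define BLOCK 256
static uint16_t buf[2 * BLOCK];        /* ping-pong buffer filled by DMA        */
static volatile int ready_half = -1;   /* -1: nothing pending, 0/1: half to write */

void dma_start_circular(void *dst, size_t len_bytes);   /* hypothetical */
bool sd_write_block(const void *data, size_t len);      /* hypothetical, may be slow */

void dma_half_complete_irq(void) { ready_half = 0; }    /* first half is full  */
void dma_complete_irq(void)      { ready_half = 1; }    /* second half is full */

void logger_run(void)
{
    dma_start_circular(buf, sizeof buf);
    for (;;) {
        if (ready_half >= 0) {
            int half = ready_half;
            ready_half = -1;
            /* SD/filesystem latency is fine here, as long as it stays
             * (on average) under the time DMA takes to fill one half. */
            sd_write_block(&buf[half * BLOCK], BLOCK * sizeof buf[0]);
        }
    }
}
```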
Lacking DMA, you do basically the same thing by reading a single value in an interrupt handler and writing it into a memory buffer. At trivially slow data rates, this isn't a problem.
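For example, a single-producer/single-consumer ring buffer is enough: the ISR pushes, the main loop pops (adc_read_result() again being a stand-in for your actual readout):

```c
/* The no-DMA version: the interrupt handler reads one sample and drops
 * it into a ring buffer; the main loop drains the buffer at its leisure. */
#include <stdint.h>

#define RB_SIZE 128                          /* power of two keeps the masking cheap  */
static volatile uint16_t rb[RB_SIZE];
static volatile uint32_t rb_head, rb_tail;   /* head written by ISR, tail by main loop */

uint16_t adc_read_result(void);              /* hypothetical readout */

void sample_irq_handler(void)
{
    uint32_t next = (rb_head + 1) & (RB_SIZE - 1);
    if (next != rb_tail) {                   /* drop the sample if the buffer is full */
        rb[rb_head] = adc_read_result();
        rb_head = next;
    }
}

int rb_pop(uint16_t *out)                    /* called from the main loop */
{
    if (rb_tail == rb_head)
        return 0;                            /* empty */
    *out = rb[rb_tail];
    rb_tail = (rb_tail + 1) & (RB_SIZE - 1);
    return 1;
}
```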
And an RTOS solves none of this. If you use an RTOS, you have exactly the same choices to make: whether to use interrupts for data collection, or to configure DMA for the same job and then manage the DMA in interrupts.
With an OS, you do have a third option: create a set of "threads" that all busy-loop polling their inputs and let the scheduler swap between them - but that's a highly inefficient way to do it, and probably not even useful from a code-clarity viewpoint. I suggest the event-driven programming paradigm; it maps pretty well onto hardware interrupts, especially on modern microcontrollers whose interrupt controllers implement pre-emption (meaning: interrupts can interrupt other interrupts in priority order) in hardware. If you choose this paradigm, you end up using very few RTOS features - maybe some FIFOs / locking primitives offered by the OS, which you can implement without the OS pretty easily.
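For completeness, a bare-metal skeleton of that event-driven style, assuming a Cortex-M where the CMSIS intrinsics __disable_irq(), __enable_irq() and __WFI() are available; the event sources and handlers are of course just placeholders:

```c
/* Bare-metal event-driven skeleton: ISRs set flags, the main loop
 * sleeps and dispatches. __disable_irq(), __enable_irq() and __WFI()
 * come from your CMSIS device header; the events are placeholders. */
#include <stdint.h>

static volatile uint8_t ev_sample_ready;
static volatile uint8_t ev_button;

void sensor_irq_handler(void) { ev_sample_ready = 1; }
void button_irq_handler(void) { ev_button = 1; }

void process_sample(void);   /* application code, hypothetical */
void handle_button(void);    /* application code, hypothetical */

int main(void)
{
    for (;;) {
        __disable_irq();                  /* short critical section for test-and-clear */
        uint8_t sample = ev_sample_ready; ev_sample_ready = 0;
        uint8_t button = ev_button;       ev_button = 0;
        if (!sample && !button)
            __WFI();                      /* sleep; a pending IRQ still wakes the core */
        __enable_irq();

        if (sample) process_sample();
        if (button) handle_button();
    }
}
```

Anything too slow to run inside an ISR goes into a handler at main-loop level; the interrupt controller's priorities already give you the pre-emption you would otherwise buy with RTOS threads.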