Hello everyone! Finally, some updates...
I have redesigned the hardware architecture. Previously there was a single DMA and one huge FSM that handled the whole data transfer.
Now we have 3 separate modules, each with its own DMA:
1. VDMA - a universal video output core. It has about 6 universal channels for streaming video data to multiple consumers. For now only two channels are active: the LCD and the OSD (on-screen display for the LCD). The remaining channels will be used for HDMI, AV and USB output.
2. DIP (digital image processing). This module includes all the image processing cores: averaging, NUC (non-uniformity correction), BPR (bad pixel replacement), and histogram equalization + AGC (automatic gain control).
3. Sensor module. This module only controls the current sensor: it feeds it bias data and commands, and grabs the video stream.
Why do this? Because the hardware design is now more scalable. Supporting a new sensor no longer requires a massive HDL rework; we just swap in a new sensor capture module. The DIP became more universal too. Yes, it still depends on knowing the sensor's active resolution, but I have an idea for removing that dependency.