Hi Gents,
I'm working on a project where I need to overlay some info onto a VGA video stream, and I'm struggling to get my head around the speed and memory constraints needed to do this. The design will have to cope with a 1080p video stream. According to
this link, the pixels are coming in at ~200 MHz.
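As a rough sanity check on the numbers (I'm assuming standard CEA-861 1080p60 timing here, which works out to 148.5 MHz; the ~200 MHz figure from the link presumably covers other refresh rates or formats):

```python
# Back-of-envelope numbers for 1080p60 (CEA-861 timing; treat these as
# assumptions, the linked ~200 MHz figure may be for a different mode).
h_active, v_active = 1920, 1080
h_total, v_total   = 2200, 1125          # active + blanking
refresh_hz         = 60

pixel_clock_hz   = h_total * v_total * refresh_hz    # 148,500,000 -> 148.5 MHz
pixels_per_frame = h_active * v_active               # 2,073,600

frame_bytes_8bpp  = pixels_per_frame * 1             # ~2.1 MB per frame
frame_bytes_24bpp = pixels_per_frame * 3             # ~6.2 MB per frame

print(f"pixel clock : {pixel_clock_hz / 1e6:.1f} MHz")
print(f"frame @8bpp : {frame_bytes_8bpp / 1e6:.1f} MB")
print(f"frame @24bpp: {frame_bytes_24bpp / 1e6:.1f} MB")
```

So a full frame is about 2 MB at 8 bits per pixel (or about 6 MB at 24 bpp), which is where the 2M x 32-bit memory in idea 1 comes from.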
Now this is where it all goes a bit pear-shaped in my analysis-paralysis brain. I have three ideas:
1:
FPGA: Hang a 2M x 32-bit memory off the FPGA, while sampling the incoming stream via an Analog Devices
ADV7604 ADC at 8 bits. The problem is that the fastest SRAM I can find is 250 MHz (I can run up to a Spartan-6 with my license, so I don't think DDR will be easily done). Getting the frame into RAM is easy enough, but getting it back out to the DACs seems impossible, as I don't think there is enough time left to read the image back out of RAM after modifying it (see the rough bandwidth check after idea 3 below).
2:
FPGA: No external RAM. Take the samples in via the ADV7604, keep an internal count of which pixel we're at, then modify the pixels in real time as they are sampled and send them back out to the DAC (see the rough sketch after idea 3 below).
3:
FPGA: No external RAM. Hang the FPGA in parallel with the VGA signals, figure out the pixel position from the Hsync and Vsync pulses, and modify the pixels we want to change live by messing with the voltages. It's dirty, but it's easy (some timing numbers for this are below as well).
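For idea 1, here's a rough check of whether a single 250 MHz SRAM could keep up with one stream going in and one coming out. Since the part is 32 bits wide and the samples are 8 bits, each access can move four packed pixels (this is just a sketch; it ignores bus turnaround and the overlay writes):

```python
# Rough SRAM bandwidth check for idea 1: one pixel stream written in, one
# read back out, pixels packed four-per-word into a 32-bit SRAM.
pixel_clock_hz  = 148.5e6     # assumed 1080p60; use ~200e6 as a worst case
bits_per_pixel  = 8
sram_width_bits = 32
sram_max_hz     = 250e6

pixels_per_word = sram_width_bits // bits_per_pixel   # 4 pixels per access
writes_per_sec  = pixel_clock_hz / pixels_per_word    # incoming stream
reads_per_sec   = pixel_clock_hz / pixels_per_word    # outgoing stream
total_accesses  = writes_per_sec + reads_per_sec

print(f"accesses needed: {total_accesses / 1e6:.0f} M/s "
      f"(SRAM tops out at {sram_max_hz / 1e6:.0f} M/s)")
```

At 148.5 MHz that's about 74 M accesses/s, and even at 200 MHz it's only 100 M/s, so on paper a single SRAM has headroom and the pain would be in the interleaving logic and FIFOs rather than raw bandwidth. I may well be missing something there though.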
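For idea 2, this is a little Python behavioural model (standing in for the eventual HDL) of the counter-plus-mux scheme: track x/y from the decoder's sync and data-enable outputs and substitute overlay pixels on the fly. The overlay position/size and overlay_lookup() are made-up placeholders, not real parts of a design:

```python
# Behavioural model of idea 2: count pixels from the decoder's sync and
# data-enable flags and swap in overlay pixels as they stream past.
H_ACTIVE = 1920
OVERLAY_X, OVERLAY_Y, OVERLAY_W, OVERLAY_H = 100, 100, 256, 64

def overlay_lookup(ox, oy):
    """Placeholder for a small on-chip ROM/BRAM holding the overlay image."""
    return 0xFF  # plain white for now

def process_stream(samples):
    """samples yields (pixel, hsync, vsync, data_enable) once per pixel clock."""
    x = y = 0
    for pix, hs, vs, de in samples:
        if vs:                       # vertical sync: top of a new frame
            x = y = 0
        if de:                       # only count/modify active pixels
            inside = (OVERLAY_X <= x < OVERLAY_X + OVERLAY_W and
                      OVERLAY_Y <= y < OVERLAY_Y + OVERLAY_H)
            yield overlay_lookup(x - OVERLAY_X, y - OVERLAY_Y) if inside else pix
            x += 1
            if x == H_ACTIVE:        # end of the active line
                x, y = 0, y + 1
        else:
            yield pix                # blanking passes straight through
```

In the FPGA this boils down to a couple of counters, a comparator and a mux running at the pixel clock, so no external RAM is needed as long as the overlay image fits in block RAM.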
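And for idea 3, some timing numbers to show what "messing with the voltages" actually has to hit: without the recovered pixel clock, the FPGA has to regenerate it (e.g. a PLL locked to Hsync) and land its edits within one pixel period. Again assuming 1080p60-ish timing:

```python
# Timing budget for idea 3: how precisely the analog "pixel poke" has to land,
# measured from Hsync. Assumes 1080p60-style timing.
pixel_clock_hz = 148.5e6
h_total        = 2200                         # pixels per line incl. blanking

pixel_period_ns = 1e9 / pixel_clock_hz        # ~6.7 ns per pixel
line_period_us  = h_total / pixel_clock_hz * 1e6

print(f"pixel period: {pixel_period_ns:.1f} ns")
print(f"line period : {line_period_us:.2f} us "
      f"(so the regenerated clock is ~{h_total} x the Hsync rate)")
```

That's a jitter budget of only a few nanoseconds per pixel, so it's really the same clock-recovery problem as idea 2, just without the ADV7604 doing the hard part for me.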
I'm thinking 3 would be the easiest to get up and running, and with the least hardware needed, but I'm not well enough versed in FPGA limitations to know whether the FPGA will be able to sample, modify and then write the pixels at 200 MHz+ without dropping pixels. I also doubt this would be possible with a uC, as the clock speeds are too high.
Am I missing anything that I should be thinking about? Is there another method to do this? Please discuss and help a brother, as my uC background is somewhat useless here.
Thanks!
JVR