As I mentioned in one of my posts in a thread in the Test Equipment section here, it's really easy to generate any waveform for this func-gen, because the RAF file format it uses is literally just raw waveform data (assuming you don't use the more complex variant of the format that has a header). Each sample is 14 bits padded out to 2 bytes (not bit-packed), the values are unsigned, and the byte order is little-endian (for each 2-byte value, the least significant byte comes first).
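To show how simple that layout is, here's a minimal Python sketch that writes samples in that headerless format. The function name and the ramp test file are just my own choices for illustration:

```python
import struct

def write_raf(path, samples):
    # Each sample is an unsigned 14-bit value (0..0x3FFF) stored
    # as a 2-byte little-endian word -- the headerless RAF layout.
    with open(path, "wb") as f:
        for s in samples:
            if not 0 <= s <= 0x3FFF:
                raise ValueError("sample out of 14-bit range")
            f.write(struct.pack("<H", s))  # "<H" = little-endian uint16

# Quick sanity check: a full-scale ramp, one step per code value.
write_raf("ramp.RAF", range(0x4000))
```

Loading that ramp into the generator should produce a clean sawtooth, which makes it a handy way to verify you've got the byte order right before trying anything fancier.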
With this knowledge I was able to use Photoshop to create a test signal from a picture I found on the internet. For the frame/field arrangement I decided on the simple one used by the SNES and N64 game consoles, which gives you 240p resolution instead of 480i. Basically, each frame is approximately the height of what is normally a field (in effect the same field repeats for every image instead of the two fields alternating, so the frame is half the normal height but runs at twice the normal frame rate). So each frame consists of exactly 263 scanlines instead of 525 (at 525, it would be split into 2 fields of 262.5 lines each), and the frame rate is 60fps instead of a 60fps field rate with a 30fps frame rate. The first 20 lines are VBI, followed by 243 lines of image, of which 3 are black (only 240 lines are actual picture, because it's a 240p signal).
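That 263-line layout is easy to write down as a little sanity check; the constant names and the classifier function below are just my own illustration of the frame structure described above:

```python
LINES_PER_FRAME = 263   # one 240p frame (no interlacing)
VBI_LINES       = 20    # vertical blanking interval at the top
BLACK_LINES     = 3     # black lines within the 243-line image area
IMAGE_LINES     = 240   # lines of actual picture

def line_kind(n):
    # Classify scanline n (0-based) within one frame.
    if n < VBI_LINES:
        return "vbi"
    if n < VBI_LINES + BLACK_LINES:
        return "black"
    return "image"

kinds = [line_kind(n) for n in range(LINES_PER_FRAME)]
assert kinds.count("image") == IMAGE_LINES   # 20 + 3 + 240 = 263
```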
I used only the tools in Photoshop to edit the image into a transmittable NTSC frame. I used the Levels control to set the sync, blanking, and black levels, and the rectangular selection box to cut out the sync signals. I didn't bother with color, because Photoshop has nothing that can generate a chroma carrier. I cut a few corners on the widths and levels of the syncs, aiming for approximately correct instead of reading the values exactly out of the NTSC specs (I've looked at the specs before and have a general idea of the values I need). For example, one corner I cut is that I made the equalizing pulses in the VBI the same width as the HSync pulses, even though equalizing pulses are normally about half that width. I then used the Levels control one last time to scale the output values into the 0x0000 to 0x3FFF range expected by the DG1022Z's 14-bit DAC, saved the image in 16-bit mode to a raw file, and made sure it had the correct file extension of RAF.
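For reference, here's roughly what that final Levels scaling amounts to numerically. The IRE values are the standard NTSC ones, but mapping sync tip to code 0x0000 and white to 0x3FFF is my own choice of mapping, not something from the DG's docs:

```python
def ire_to_code(ire):
    # Linearly map -40 IRE (sync tip) .. +100 IRE (white)
    # onto the DG1022Z's 14-bit code range 0x0000 .. 0x3FFF.
    code = round((ire + 40) / 140 * 0x3FFF)
    return max(0, min(0x3FFF, code))

SYNC  = ire_to_code(-40)   # sync tip -> 0x0000
BLANK = ire_to_code(0)     # blanking level
BLACK = ire_to_code(7.5)   # NTSC setup ("pedestal"), just above blanking
WHITE = ire_to_code(100)   # white -> 0x3FFF
```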
I used a USB stick to get this file into my DG, then manually set the output to a 50ohm load and 1Vpp signal amplitude. I then set the correct sample rate on the DG (12.6MSa/s), based on the intended line rate of 15750 lines/sec and a total line width of 800 samples (counting the image, blanking, and sync samples on each line). Given the frame height of 263 lines, that gives me 59.89 frames/sec, which is close enough to the 60 fields/sec of monochrome TV that it shouldn't make a difference (note that color TV actually uses 59.94 fields/sec, and 59.89 is even farther off-spec than that, but it still seems to work for game consoles like the N64 and SNES).
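The rate arithmetic from the paragraph above, spelled out (variable names are mine):

```python
LINE_RATE        = 15750   # lines/sec (monochrome NTSC)
SAMPLES_PER_LINE = 800     # image + blanking + sync samples per line
LINES_PER_FRAME  = 263

sample_rate = LINE_RATE * SAMPLES_PER_LINE               # 12,600,000 Sa/s
frame_rate  = sample_rate / (SAMPLES_PER_LINE * LINES_PER_FRAME)

print(sample_rate)             # 12600000 -> the 12.6MSa/s setting
print(round(frame_rate, 2))    # 59.89 frames/sec
```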
Note that in a separate experiment with this DG, my Picoscope, and a 50ohm terminator, I found the output voltage was actually about 1.2Vpp instead of 1Vpp. This might be an issue with the 50ohm BNC terminator I used; a real TV's internal termination resistor might be much closer to an actual 50ohms, in which case the TV would see an actual 1.2Vpp signal. I also figured that any decently built TV, even if it were getting 1.2Vpp instead of the proper 1Vpp, would still likely be able to compensate for this and not actually be damaged, so I decided to go ahead with my TV experiment.
After getting the DG set up, I connected it to the composite video input of my TV via an RCA video cable and a BNC-to-RCA adapter, and switched on the signal output of my DG. And guess what, IT ACTUALLY WORKED!!!
I got a picture of my TV's display with my cellphone camera, and I've attached that photo to this post.