Is live morphing the video from a webcam feasible?
Just for fun, I checked how long it takes on an Intel Core i5-7200U to morph a 1920x1080 full-color image using an arbitrary mapping, and the answer was about 3 milliseconds. So, real-time webcam morphing with a static mapping is definitely feasible.
I don't know how feasible it is with OpenCV, though.
The age-old mapping technique is to precalculate an array, one element per output pixel, identifying the source pixel (typically as the offset to the source image origin). In C,
    uint32_t *map;  // Dynamically allocated, map[HEIGHT][WIDTH]
    uint32_t *src;  // Dynamically allocated, src[HEIGHT][WIDTH]
    uint32_t *dst;  // Dynamically allocated, dst[HEIGHT][WIDTH]

where map is precalculated for each transform (including scaling) only once, i.e.
    map[dx + dy*WIDTH] = sx + sy*WIDTH;

where (dx,dy) is the transformed image point and (sx,sy) is the corresponding source image point, with dx=0..WIDTH-1, dy=0..HEIGHT-1, sx=0..WIDTH-1, and sy=0..HEIGHT-1.
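For example, precalculating the map for a simple horizontal mirror could look like the following (a minimal sketch; any warp that produces a source point (sx,sy) for each output pixel fits the same double loop):

    /* Sketch: precalculate the map for one example transform, a horizontal
       mirror.  Replace the (sx,sy) computation with whatever warp you need. */
    #include <stdlib.h>
    #include <stdint.h>

    #define WIDTH  1920
    #define HEIGHT 1080

    uint32_t *build_mirror_map(void)
    {
        uint32_t *map = malloc((size_t)WIDTH * HEIGHT * sizeof *map);
        if (!map)
            return NULL;
        for (uint32_t dy = 0; dy < HEIGHT; dy++)
            for (uint32_t dx = 0; dx < WIDTH; dx++) {
                uint32_t sx = WIDTH - 1 - dx;  // mirror horizontally
                uint32_t sy = dy;              // rows unchanged
                map[dx + dy*WIDTH] = sx + sy*WIDTH;
            }
        return map;
    }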
You can reserve an extra pixel in the source and destination arrays, so that index WIDTH*HEIGHT can be used for pixels that cannot be mapped or map to outside the source image.
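A minimal sketch of that extra-pixel trick, assuming the arrays are allocated with WIDTH*HEIGHT+1 elements and the sentinel element is kept at a fixed fill color (alloc_frame and map_entry are just hypothetical helper names):

    /* Sketch: allocate WIDTH*HEIGHT + 1 elements so that index WIDTH*HEIGHT
       is a valid "nowhere" pixel, and clamp unmappable points to it. */
    #include <stdlib.h>
    #include <stdint.h>

    #define WIDTH    1920
    #define HEIGHT   1080
    #define SENTINEL ((uint32_t)(WIDTH * HEIGHT))  // index of the extra pixel

    uint32_t *alloc_frame(void)
    {
        // One extra element beyond WIDTH*HEIGHT for the sentinel pixel.
        return malloc(((size_t)WIDTH * HEIGHT + 1) * sizeof(uint32_t));
    }

    // When building the map, send out-of-range source coordinates to the
    // sentinel instead of a real pixel.
    static uint32_t map_entry(long sx, long sy)
    {
        if (sx < 0 || sx >= WIDTH || sy < 0 || sy >= HEIGHT)
            return SENTINEL;
        return (uint32_t)(sx + sy*WIDTH);
    }

    // After allocating: src[SENTINEL] = 0xFF000000u;  // e.g. opaque black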
The actual mapping loop you do for each image frame, assuming 32-bit pixels, is then
    size_t i = WIDTH * HEIGHT;
    while (i-- > 0)
        dst[i] = src[map[i]];

and as I already said, on an old Intel Core i5-7200U this takes about 3 milliseconds with WIDTH=1920, HEIGHT=1080.
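If you want to check the timing on your own hardware, a minimal sketch using POSIX clock_gettime() could look like this (remap_timed is just a hypothetical wrapper around the loop above):

    /* Sketch: time the per-frame remap with CLOCK_MONOTONIC.  Assumes src,
       dst and map are allocated and filled as described above. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    void remap_timed(uint32_t *dst, const uint32_t *src,
                     const uint32_t *map, size_t count)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        size_t i = count;
        while (i-- > 0)
            dst[i] = src[map[i]];

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3
                  + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        fprintf(stderr, "remap took %.3f ms\n", ms);
    }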