I am trying to get the "*_mono16" video modes working with camera1394. If I just copy the two-byte pixels, running image_view on camera_raw fails with an OpenCV exception. I can't tell whether this is a bug in the camera driver or in image_view. I've looked at the docs and code, but I'm still not clear on how 16-bit monochrome image data should be encoded. The two-byte-per-pixel copy is modeled on cameradc1394. Is this snippet of code wrong?

    case DC1394_VIDEO_MODE_640x480_MONO16:
    case DC1394_VIDEO_MODE_800x600_MONO16:
    case DC1394_VIDEO_MODE_1024x768_MONO16:
    case DC1394_VIDEO_MODE_1280x960_MONO16:
      if (!DoBayerConversion_)
        {
          image.step = image.width * 2;
          image_size = image.height * image.step;
          image.encoding = enc::MONO16;
          image.data.resize(image_size);
          memcpy(&image.data[0], capture_buffer, image_size);
        }

Could this be a little-endian problem?
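To clarify what I mean by that: as far as I can tell, IIDC cameras transmit 16-bit data most-significant byte first, while x86 is little-endian, so the driver would either need to set the is_bigendian field of the sensor_msgs/Image, or swap the bytes while copying. Something like this untested sketch is what I have in mind in place of the plain memcpy (copy_mono16_swapped is just a name I made up for illustration):

    // Untested sketch: copy a 16-bit mono frame into the image buffer while
    // swapping each pixel's two bytes.  Assumes the camera delivers MONO16
    // most-significant byte first and the host is little-endian.
    #include <stdint.h>
    #include <stddef.h>

    static void copy_mono16_swapped(uint8_t *dest, const uint8_t *src,
                                    size_t image_size)
    {
      for (size_t i = 0; i + 1 < image_size; i += 2)
        {
          dest[i]     = src[i + 1];       // low byte first on the host
          dest[i + 1] = src[i];           // high byte second
        }
    }

    // ... and in the case above, instead of the memcpy:
    //   image.is_bigendian = 0;          // data is now in host byte order
    //   copy_mono16_swapped(&image.data[0],
    //                       (const uint8_t *) capture_buffer,
    //                       image_size);

Or, if image_view actually honors is_bigendian, maybe keeping the memcpy and just setting image.is_bigendian = 1 would be enough? I'm not sure which behavior is intended.

-- joq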