Hello.
I just reviewed the stereo_image_proc tutorial and the image_pipeline/Camera info page, and there's something I'm unclear about.

I understand that I get a 3D point cloud from stereo_image_proc expressed with respect to the left imager's optical frame.
Now I'm interested in getting a 2D depth map whose pixel values are set to the depth (Z) of the corresponding point in the cloud. (No, I do NOT want a disparity map.)

The only way I can think of doing this is to multiply K from left/camera_info (sensor_msgs/CameraInfo) by the vector [X/Z, Y/Z, 1] for each point in the cloud and set that pixel's value to Z; a rough sketch of what I mean is below.
Is there another transform or parameter I need to introduce?
Or even better, is there a ROS implementation of this functionality already?
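
In case it makes my question clearer, here is a rough sketch of the projection I have in mind (Python/numpy). It assumes I already have the cloud as an Nx3 array of (X, Y, Z) points in the left optical frame and K as a 3x3 matrix taken from left/camera_info; the function and variable names are just placeholders.

```python
import numpy as np

def depth_map_from_points(points, K, width, height):
    """Project each 3D point through K and write its Z into the pixel it hits."""
    depth = np.zeros((height, width), dtype=np.float32)
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    for X, Y, Z in points:
        if not np.isfinite(Z) or Z <= 0:
            continue
        # K @ [X/Z, Y/Z, 1] gives the homogeneous pixel coordinates (u, v, 1)
        u = int(round(fx * X / Z + cx))
        v = int(round(fy * Y / Z + cy))
        if 0 <= u < width and 0 <= v < height:
            # keep the nearest point if several project onto the same pixel
            if depth[v, u] == 0 or Z < depth[v, u]:
                depth[v, u] = Z
    return depth
```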

Thank you.