Stereo Imaging



Namespace

Emgu.CV.CameraCalibration

References

EMGU Reference
EMGU Camera Calibration
OpenCV

Downloads

Source Code V1.0

Example

The following example shows the use of the stereo calibration functions within EMGU to produce a matched stereo camera pair for 3D reconstruction. The code calibrates the field of view of two cameras and uses their intrinsics to calculate the alignment between the paired images. The example demonstrates the use of the CameraCalibration class in the 3D reconstruction of a pair of images from web cameras.

Software

[Screenshots: "Calibration" and "Calibrated"]


Ideal Settings

[Screenshots: "Left", "Right" and "Disparity Map"]


Pre-Requisites

The code provided should run straight out of the Emgu.Example folder (V2.4.2), so extract it to that location. If the code fails to execute, re-reference the EMGU libraries and include the required OpenCV DLLs in the bin directory. Note that the project is set to build to the output path "..\..\..\bin\"; you may wish to change this if you don't extract to the EMGU.Example folder.

Two cameras are required for this application. They don't have to be the same model (although this is suggested); if not, additional frame adjustments may be required to produce frames of the same size. Matched camera pairs will produce the best results as they will have similar CCD sizes and lenses.

To run 3D reconstruction, calibration of the cameras must be done first, and you will require a chessboard to do this. There is one available here (pdf) (or png here); print this out and place it onto a flat surface. For more information on simple camera calibration see here.


EMGU Coding Level: The rated level for this example is Advanced. This is not designed as a beginner's tutorial, and general knowledge of the EMGU video capture example is expected. While comments are provided in the code, you may find some basic detail missing.


The Code

The code provided in this sample is basic; there is little or no error checking. Support is available through the Forums, but please try to examine the code before saying it doesn't work for you. The code is not optimised; instead it is formatted to provide an understanding of the stages involved.


The Code: Variables

Setting of the chessboard size is achieved using the width and height variables; try playing with these, and with a larger chessboard, to see what effect they have on calibration. The Bgr array line_colour_array is used to draw the detected chessboard corners on the image.

            const int width = 9;//9 //width of chessboard: no. of squares in width - 1
            const int height = 6;//6 //height of chessboard: no. of squares in height - 1
            Size patternSize = new Size(width, height); //size of chessboard to be detected
            Bgr[] line_colour_array = new Bgr[width * height]; //just for displaying coloured lines of detected chessboard


These buffers store the relevant calibration information acquired from FindChessboardCorners(); they are set to 100 by default using the static int buffer_length. If you are running a slower system, reduce this before executing the program.

            MCvPoint3D32f[][] corners_object_Points = new MCvPoint3D32f[buffer_length][]; //stores the calculated size for the chessboard
            PointF[][] corners_points_Left = new PointF[buffer_length][];//stores the calculated points from chessboard detection Camera 1
            PointF[][] corners_points_Right = new PointF[buffer_length][];//stores the calculated points from chessboard detection Camera 2
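
The buffer_length field itself is declared elsewhere in the source; a minimal declaration consistent with the description above would be:

            static int buffer_length = 100; //number of chessboard captures gathered before calibration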


The intrinsic parameters used to map the pair of images together are stored in IntrinsicCam1 and IntrinsicCam2. Depending on the calibration method used (CALIB_TYPE), some parameters of these intrinsics will need setting up before being used (see here).

            IntrinsicCameraParameters IntrinsicCam1 = new IntrinsicCameraParameters();
            IntrinsicCameraParameters IntrinsicCam2 = new IntrinsicCameraParameters();
            ExtrinsicCameraParameters EX_Param;


After StereoCalibrate has been applied, a Q matrix is generated using CvInvoke.cvStereoRectify(). The output from this algorithm is stored in the following variables. We only use the Q matrix in producing the disparity map.

            Matrix<double> fundamental; //fundamental output matrix for StereoCalibrate
            Matrix<double> essential; //essential output matrix for StereoCalibrate
            Rectangle Rec1 = new Rectangle(); //Rectangle calibrated in camera 1
            Rectangle Rec2 = new Rectangle(); //Rectangle calibrated in camera 2
            Matrix<double> Q = new Matrix<double>(4, 4); //This is what we're interested in: the disparity-to-depth mapping matrix
            Matrix<double> R1 = new Matrix<double>(3, 3); //rectification transform (rotation matrix) for Camera 1.
            Matrix<double> R2 = new Matrix<double>(3, 3); //rectification transform (rotation matrix) for Camera 2.
            Matrix<double> P1 = new Matrix<double>(3, 4); //projection matrix in the new (rectified) coordinate system for Camera 1.
            Matrix<double> P2 = new Matrix<double>(3, 4); //projection matrix in the new (rectified) coordinate system for Camera 2.
            private MCvPoint3D32f[] _points; //Computer3DPointsFromStereoPair


The Code: Methods

There are numerous methods used in the code; only the three most relevant will be discussed. This is an advanced coding example, and additional comments are available in the code. Other methods deal with form events, thread-safe accessing of controls, etc.

public Form1()

Here the Bgr buffer is filled to display the detected chessboard corners; the same array is used to draw both sets of corners on their associated images. An example of looking up the cameras available to the system is given; this makes use of the DirectShow library provided. A better use of this is provided here. The method checks to see if more than one camera is detected: if only one is detected a message box is shown with an error, and if none are detected the program will throw an error. If two or more are detected, the default device and the next device in the list are used. You may want to change this if you have a built-in laptop camera and you're using two USB web cameras.
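
The fill itself is not shown on this page; a minimal sketch, assuming random colours are acceptable (the Random-based approach is an assumption, not necessarily what the source does):

            //Fill line_colour_array with random colours used to draw the detected corners
            Random R = new Random();
            for (int i = 0; i < line_colour_array.Length; i++)
            {
                line_colour_array[i] = new Bgr(R.Next(0, 255), R.Next(0, 255), R.Next(0, 255));
            }

The camera lookup discussed above then follows: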


            //-> Find system cameras with DirectShow.Net dll
            //thanks to carles lloret
            DsDevice[] _SystemCameras = DsDevice.GetDevicesOfCat(FilterCategory.VideoInputDevice);
            Video_Device[] WebCams = new Video_Device[_SystemCameras.Length];
            for (int i = 0; i < _SystemCameras.Length; i++) WebCams[i] = new Video_Device(i, _SystemCameras[i].Name, _SystemCameras[i].ClassID);
 
            //check to see what video inputs we have available
            if (WebCams.Length < 2)
            {
                if (WebCams.Length == 0) throw new InvalidOperationException("A camera device was not detected");
                MessageBox.Show("Only 1 camera detected. Stereo Imaging can not be performed");
            }
            else if (WebCams.Length >= 2)
            {
                if (WebCams.Length > 2) MessageBox.Show("More than 2 cameras detected. Stereo Imaging will be performed using " + WebCams[0].Device_Name + " and " + WebCams[1].Device_Name);
                _Capture1 = new Capture(WebCams[0].Device_ID);
                _Capture2 = new Capture(WebCams[1].Device_ID);
                //We will only use one frame-ready event; this is not really safe but it fits the purpose
                _Capture1.ImageGrabbed += ProcessFrame;
                //_Capture2.Start(); //We make sure we start Capture device 2 first
                _Capture1.Start();
            }


ProcessFrame()

There are four sections to this method. The first #region deals simply with frame acquisition from both cameras. If you are using two different camera types you may have to resize the acquired frames to be the same size here.
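
A hedged sketch of such a resize (frame_S1 appears in the source; frame_S2 as its counterpart is an assumption):

            //Match the second frame's dimensions to the first so that both
            //calibration and disparity calculation see equally sized images
            frame_S2 = frame_S2.Resize(frame_S1.Width, frame_S1.Height, Emgu.CV.CvEnum.INTER.CV_INTER_LINEAR);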

The second #region, 'Saving Chessboard Corners in Buffer', fills the PointF[][] buffers, for the left and right cameras, with successful points acquired from detecting a chessboard pattern within the acquired frame. The buffer's size is controlled by the global variable buffer_length; the default is 100 frames. The more frames the better the calibration, but the longer processing will take. The camera intrinsics can be pre-calculated and loaded. For more details on this code see the Camera Calibration example, in which this method is used on one camera.
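
A minimal sketch of this region, assuming gray_frame_S1 and gray_frame_S2 are greyscale copies of the two frames and buffer_savepoint tracks the current buffer position (these helper names are assumptions):

            //Detect the chessboard pattern in both camera frames
            PointF[] corners_Left = CameraCalibration.FindChessboardCorners(gray_frame_S1, patternSize, Emgu.CV.CvEnum.CALIB_CB_TYPE.ADAPTIVE_THRESH);
            PointF[] corners_Right = CameraCalibration.FindChessboardCorners(gray_frame_S2, patternSize, Emgu.CV.CvEnum.CALIB_CB_TYPE.ADAPTIVE_THRESH);
 
            if (corners_Left != null && corners_Right != null) //chessboard found in both frames
            {
                //refine the corner locations to sub-pixel accuracy
                gray_frame_S1.FindCornerSubPix(new PointF[][] { corners_Left }, new Size(11, 11), new Size(-1, -1), new MCvTermCriteria(25, 0.03));
                gray_frame_S2.FindCornerSubPix(new PointF[][] { corners_Right }, new Size(11, 11), new Size(-1, -1), new MCvTermCriteria(25, 0.03));
 
                //store the matched pair of detections and advance the buffer
                corners_points_Left[buffer_savepoint] = corners_Left;
                corners_points_Right[buffer_savepoint] = corners_Right;
                buffer_savepoint++;
            }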

Once the buffers are filled with matching points from both cameras, the third stage is initiated: #region Calculating Stereo Cameras Relationship. Here the real-world measurements of the chessboard are stored in an array (in this example the assumed square size is 20x20 mm; see here for details). The first method that we are interested in is CameraCalibration.StereoCalibrate(), which is used to calculate the camera intrinsics. If these are already known they can be loaded instead. This method can take a while to finish executing, so a message box is shown to notify of completion.

The next method is cvStereoRectify, a CvInvoke call that computes the rectification transforms for each head of the calibrated stereo cameras. While a lot of variables are passed to and produced by this method call, the only variable really required is the Q matrix, the disparity-to-depth mapping matrix. This can be saved and loaded once a pair of cameras has been stereo calibrated. If the cameras are moved they will require re-calibration, so a jig would be required to hold the cameras securely.


            #region Calculating Stereo Cameras Relationship
            if (currentMode == Mode.Caluculating_Stereo_Intrinsics)
            {
                //fill the MCvPoint3D32f with the correct measurements
                for (int k = 0; k < buffer_length; k++)
                {
                    //Fill our objects list with the real world measurements for the intrinsic calculations
                    List<MCvPoint3D32f> object_list = new List<MCvPoint3D32f>();
                    for (int i = 0; i < height; i++)
                    {
                        for (int j = 0; j < width; j++)
                        {
                            object_list.Add(new MCvPoint3D32f(j * 20.0F, i * 20.0F, 0.0F));
                        }
                    }
                    corners_object_Points[k] = object_list.ToArray();
                }
                //If Emgu.CV.CvEnum.CALIB_TYPE == CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized before calling the function
                //if you use the FIX_ASPECT_RATIO and FIX_FOCAL_LENGTH options, these values need to be set in the intrinsic parameters before the CalibrateCamera function is called. Otherwise 0 values are used as defaults.
                CameraCalibration.StereoCalibrate(corners_object_Points, corners_points_Left, corners_points_Right, IntrinsicCam1, IntrinsicCam2, frame_S1.Size,
                                                                 Emgu.CV.CvEnum.CALIB_TYPE.DEFAULT, new MCvTermCriteria(0.1e5), 
                                                                 out EX_Param, out fundamental, out essential);
                MessageBox.Show("Intrinsic Calculation Complete"); //display that the method has been successful
                //currentMode = Mode.Calibrated;
 
                //Computes rectification transforms for each head of a calibrated stereo camera.
                CvInvoke.cvStereoRectify(IntrinsicCam1.IntrinsicMatrix, IntrinsicCam2.IntrinsicMatrix,
                                         IntrinsicCam1.DistortionCoeffs, IntrinsicCam2.DistortionCoeffs,
                                         frame_S1.Size,
                                         EX_Param.RotationVector.RotationMatrix, EX_Param.TranslationVector,
                                         R1, R2, P1, P2, Q,
                                         Emgu.CV.CvEnum.STEREO_RECTIFY_TYPE.DEFAULT, 0,
                                         frame_S1.Size, ref Rec1, ref Rec2);
 
                //This will show us the usable area from each camera
                MessageBox.Show("Left: " + Rec1.ToString() + " \nRight: " + Rec2.ToString());
                currentMode = Mode.Calibrated;
            }
            #endregion
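
As noted above, the Q matrix can be persisted once a camera pair has been calibrated. A minimal sketch using XML serialization (the file name is an assumption; this relies on Emgu's Matrix<double> being XML-serializable and requires using System.IO and System.Xml.Serialization):

            //Save the disparity-to-depth mapping matrix so calibration
            //does not have to be repeated on every run
            XmlSerializer serializer = new XmlSerializer(typeof(Matrix<double>));
            using (StreamWriter writer = new StreamWriter("Q_matrix.xml")) //file name is an assumption
            {
                serializer.Serialize(writer, Q);
            }
 
            //...and reload it on a later run
            using (StreamReader reader = new StreamReader("Q_matrix.xml"))
            {
                Q = (Matrix<double>)serializer.Deserialize(reader);
            }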


The final stage is the production of the disparity map, which is produced by calling the Computer3DPointsFromStereoPair() method. This method comes from the Simple3DReconstruction example; however, the StereoSGBM class is used instead of the less accurate StereoBM class.


Computer3DPointsFromStereoPair()

This method acquires values from the sliders on the form using the GetSliderValue method, which is a simple thread-safe call. The restrictions on the values are given in the code; the P1 and P2 variables are calculated according to the OpenCV reference rather than using a slider. Alternatively, additional sliders can be introduced to set these if required. The produced disparity map can be used to project the acquired images back into 3D space; see the Simple3DReconstruction example provided with EMGU. A sketch of the slider access and the P1/P2 calculation is shown below, followed by the core of the method.
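
A minimal sketch of both, assuming the form uses TrackBar controls (the delegate-based body of GetSliderValue and the control name Slider_SAD_Window are assumptions):

            //Thread-safe read of a TrackBar's value from the processing thread
            private int GetSliderValue(TrackBar control)
            {
                if (control.InvokeRequired)
                    return (int)control.Invoke(new Func<int>(() => control.Value));
                return control.Value;
            }
 
            //Inside Computer3DPointsFromStereoPair(): P1 and P2 calculated as
            //suggested by the OpenCV StereoSGBM documentation
            int SAD = GetSliderValue(Slider_SAD_Window);
            int P1 = 8 * 3 * SAD * SAD;  //8 * number_of_image_channels * SADWindowSize^2
            int P2 = 32 * 3 * SAD * SAD; //32 * number_of_image_channels * SADWindowSize^2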


            using (StereoSGBM stereoSolver = new StereoSGBM(minDisparities, numDisparities, SAD, P1, P2, disp12MaxDiff, PreFilterCap, UniquenessRatio, Speckle, SpeckleRange, fullDP))
            //using (StereoBM stereoSolver = new StereoBM(Emgu.CV.CvEnum.STEREO_BM_TYPE.BASIC, 0))
            {
                stereoSolver.FindStereoCorrespondence(left, right, disparityMap);//Computes the disparity map using: 
                /*GC: graph cut-based algorithm
                  BM: block matching algorithm
                  SGBM: modified H. Hirschmuller algorithm HH08*/
                points = PointCollection.ReprojectImageTo3D(disparityMap, Q); //Reprojects disparity image to 3D space.
            }
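
As a usage note, StereoSGBM's raw output is a 16-bit disparity image with values scaled by 16; a hedged sketch of converting it for on-screen display (the PictureBox name is an assumption):

            //Convert the 16-bit disparity map down to 8-bit for display
            Image<Gray, Byte> displayableDisparity = disparityMap.Convert<Gray, Byte>();
            DisparityMap_PictureBox.Image = displayableDisparity.ToBitmap(); //control name is an assumption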

Methods Available

Used

  • CalibrateCamera(Point3D<Single>[][], Point2D<Single>[][], MCvSize, IntrinsicCameraParameters, CALIB_TYPE, ExtrinsicCameraParameters[])
  • FindChessboardCorners(Image<Gray, Byte>, MCvSize, CALIB_CB_TYPE, Point2D<Single>[])
  • StereoCalibrate(Point3D<Single>[][], Point2D<Single>[][], Point2D<Single>[][], IntrinsicCameraParameters, IntrinsicCameraParameters, MCvSize, CALIB_TYPE, MCvTermCriteria, ExtrinsicCameraParameters, Matrix<Single>, Matrix<Single>)

Unused

  • DrawChessboardCorners(Image<Gray, Byte>, MCvSize, Point2D<Single>[], Boolean)
  • FindExtrinsicCameraParams2(Point3D<Single>[], Point2D<Single>[], IntrinsicCameraParameters)
  • FindHomography(Matrix<Single>, Matrix<Single>)
  • FindHomography(Matrix<Single>, Matrix<Single>, Double)
  • FindHomography(Matrix<Single>, Matrix<Single>, HOMOGRAPHY_METHOD, Double)
  • FindHomography(Point2D<Single>[], Point2D<Single>[], HOMOGRAPHY_METHOD, Double)
  • ProjectPoints2(Point3D<Single>[], ExtrinsicCameraParameters, IntrinsicCameraParameters, Matrix<Single>[])
  • Undistort2<C, D>(Image<C, D>, IntrinsicCameraParameters)


Extra Methods

  • StereoSGBM(int minDisparity, int numDisparities, int SADWindowSize, int P1, int P2, int disp12MaxDiff, int preFilterCap, int uniquenessRatio, int speckleWindowSize, int speckleRange, bool fullDP);
  • MCvPoint3D32f[] ReprojectImageTo3D(Image<Gray, short> disparity, Matrix<double> Q);
  • cvStereoRectify(IntPtr cameraMatrix1, IntPtr cameraMatrix2, IntPtr distCoeffs1, IntPtr distCoeffs2, MCvSize imageSize, IntPtr R, IntPtr T, IntPtr R1, IntPtr R2, IntPtr P1, IntPtr P2, IntPtr Q, STEREO_RECTIFY_TYPE flags)


Bugs

  1. There is a refresh problem when displaying the disparity map image in some cases. This is due to the demand of processing the main images.
  2. Currently an event from the first camera is used to acquire both frames. This is not ideal and will be addressed in future versions of the program.