Functions used for camera calibration
Declaration Syntax
C#:
public static class CameraCalibration

Visual Basic:
Public NotInheritable Class CameraCalibration

Visual C++:
public ref class CameraCalibration abstract sealed
Members

CalibrateCamera(MCvPoint3D32f[][], PointF[][], Size, IntrinsicCameraParameters, CALIB_TYPE, out ExtrinsicCameraParameters[])
Estimates intrinsic camera parameters and extrinsic parameters for each of the views
 
DrawChessboardCorners(Image&lt;Gray, Byte&gt;, Size, PointF[], Boolean)
Draws the detected chessboard corners: as red circles if the board was not found (patternWasFound == false), or as colored corners connected with lines if the board was found (patternWasFound == true).
 
FindChessboardCorners(Image&lt;Gray, Byte&gt;, Size, CALIB_CB_TYPE, out PointF[])
Attempts to determine whether the input image is a view of the chessboard pattern and to locate the internal chessboard corners.
 
FindExtrinsicCameraParams2(MCvPoint3D32f[], PointF[], IntrinsicCameraParameters)
Estimates the extrinsic camera parameters for a view using known intrinsic parameters. The coordinates of the 3D object points and their corresponding 2D projections must be specified. The function also minimizes the back-projection error.
 
FindHomography(Matrix&lt;Single&gt;, Matrix&lt;Single&gt;)
Uses all points to find the perspective transformation H = h_ij between the source and destination planes.
 
FindHomography(Matrix&lt;Single&gt;, Matrix&lt;Single&gt;, Double)
Uses RANSAC to find the perspective transformation H = h_ij between the source and destination planes.
 
FindHomography(Matrix&lt;Single&gt;, Matrix&lt;Single&gt;, HOMOGRAPHY_METHOD, Double)
Uses the specified method to find the perspective transformation H = h_ij between the source and destination planes.
 
FindHomography(PointF[], PointF[], HOMOGRAPHY_METHOD, Double)
Finds the perspective transformation H = h_ij between the source and destination planes.
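A homography returned by these overloads is a 3x3 matrix that maps source-plane points to destination-plane points in homogeneous coordinates. A minimal sketch of how H acts on a 2D point (plain Python for illustration, not the Emgu CV API):

```python
# Applying a 3x3 homography H to a 2D point (x, y): lift the point to
# homogeneous coordinates, multiply by H, then divide by the last component.

def apply_homography(H, x, y):
    """Map (x, y) through the 3x3 homography H (given as 3 row lists)."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w

# A pure translation by (5, -2) expressed as a homography:
H = [[1, 0, 5],
     [0, 1, -2],
     [0, 0, 1]]
print(apply_homography(H, 10.0, 10.0))  # (15.0, 8.0)
```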
 
ProjectPoints2(MCvPoint3D32f[], ExtrinsicCameraParameters, IntrinsicCameraParameters, Matrix&lt;Single&gt;[])
Computes the projections of 3D points onto the image plane given the intrinsic and extrinsic camera parameters.
Optionally, the function computes Jacobians: matrices of partial derivatives of the image points with respect to particular input parameters, intrinsic and/or extrinsic.
The Jacobians are used during the global optimization in cvCalibrateCamera2 and cvFindExtrinsicCameraParams2.
The function itself is also used to compute the back-projection error with the current intrinsic and extrinsic parameters.
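At its core this is the pinhole projection: a 3D point is first moved into the camera frame by the extrinsics (R, t), then mapped to pixels by the intrinsics (fx, fy, cx, cy). A plain-Python sketch of that mapping, with made-up parameter values and distortion omitted (illustrative only, not the Emgu CV API):

```python
# Pinhole projection sketch: Xc = R*X + t, then u = fx*Xc/Zc + cx,
# v = fy*Yc/Zc + cy. Lens distortion is left out for brevity.

def project_point(R, t, fx, fy, cx, cy, X):
    # Rigid transform into the camera frame: Xc = R*X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Perspective divide, then intrinsic scaling and principal-point offset
    u = fx * Xc[0] / Xc[2] + cx
    v = fy * Xc[1] / Xc[2] + cy
    return u, v

# Identity rotation, point pushed 4 units along the optical axis:
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 4.0]
u, v = project_point(R, t, fx=500.0, fy=500.0, cx=320.0, cy=240.0,
                     X=[1.0, 2.0, 1.0])
print(u, v)  # 420.0 440.0
```

The back-projection error mentioned above is simply the distance between such projected points and the measured 2D points.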
 
StereoCalibrate(MCvPoint3D32f[][], PointF[][], PointF[][], IntrinsicCameraParameters, IntrinsicCameraParameters, Size, CALIB_TYPE, MCvTermCriteria, out ExtrinsicCameraParameters, out Matrix&lt;Double&gt;, out Matrix&lt;Double&gt;)
Estimates the transformation between the two cameras of a stereo pair. With a stereo rig, where the relative position and orientation of the two cameras is fixed, suppose the poses of an object relative to the first and to the second camera have been computed as (R1, T1) and (R2, T2) respectively (for example with cvFindExtrinsicCameraParams2). Those poses relate to each other: given (R1, T1), it should be possible to compute (R2, T2), since only the position and orientation of the second camera relative to the first is needed. That is what this function does. It computes (R, T) such that:
R2 = R*R1,
T2 = R*T1 + T
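The relation can be checked numerically: since R1 is a rotation (its inverse equals its transpose), the inter-camera transform is recovered from the two poses as R = R2*R1^T and T = T2 - R*T1. A plain-Python sketch with a made-up pose (illustrative only, not the Emgu CV API):

```python
# Numeric check of R2 = R*R1 and T2 = R*T1 + T using small 3x3 helpers.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, x):
    return [sum(A[i][k] * x[k] for k in range(3)) for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

# Pose of the object in the first camera (identity rotation here) and the
# fixed inter-camera transform R, T (a 90-degree turn about the z axis):
R1 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
T1 = [1.0, 2.0, 3.0]
R  = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
T  = [0.5, 0.0, -1.0]

# Pose in the second camera, per the relation above:
R2 = matmul(R, R1)
T2 = [matvec(R, T1)[i] + T[i] for i in range(3)]

# Recover (R, T) from the two poses, as StereoCalibrate estimates them:
R_rec = matmul(R2, transpose(R1))
T_rec = [T2[i] - matvec(R_rec, T1)[i] for i in range(3)]
print(R_rec == R, T_rec == T)  # True True
```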
 
Undistort2&lt;TColor, TDepth&gt;(Image&lt;TColor, TDepth&gt;, IntrinsicCameraParameters)
Transforms the image to compensate for radial and tangential lens distortion.
The camera matrix and distortion parameters can be determined using cvCalibrateCamera2. For every pixel in the output image, the function computes the coordinates of the corresponding location in the input image using the distortion formulae; the pixel value is then computed using bilinear interpolation. If the image resolution differs from the resolution used at the calibration stage, fx, fy, cx and cy need to be adjusted accordingly, while the distortion coefficients remain the same.
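The distortion coefficients parameterize the standard OpenCV radial/tangential model; a plain-Python sketch of the forward (distorting) mapping that undistortion inverts, acting on normalized image coordinates (illustrative only, not the Emgu CV API):

```python
# Radial/tangential distortion model with coefficients (k1, k2, p1, p2),
# applied to normalized coordinates xn = X/Z, yn = Y/Z. Undistort2
# compensates for this mapping when resampling the image.

def distort(xn, yn, k1, k2, p1, p2):
    r2 = xn * xn + yn * yn
    radial = 1 + k1 * r2 + k2 * r2 * r2          # radial factor
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2 * yn * yn) + 2 * p2 * xn * yn
    return xd, yd

# With all coefficients zero the mapping is the identity:
print(distort(0.3, -0.2, 0, 0, 0, 0))  # (0.3, -0.2)
```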

Inheritance Hierarchy
Object  
CameraCalibration 