JSON Output from Calibration Software

Understanding Camera Calibration Parameters

This blog post describes each of the individual values in the JSON output file produced after camera calibration. Knowing what each parameter is and does makes it easier to use the results in custom solutions and integrations. The camera calibration software works for both monocular and stereo cameras. For stereo calibration, each camera is first calibrated individually and the two are then calibrated together based on those results. Stereo calibration produces additional output parameters that map from 2D image space to 3D space and describe the relation between the two cameras. The software will be extended to n-view camera calibration as well.
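
As a rough sketch, the single-camera values can be read back into NumPy arrays as shown below. The file name and the exact nesting of the JSON are assumptions for illustration; the key names are the ones described in the rest of this post.

import json
import numpy as np

# Hypothetical file name; the calibration software writes its results to a JSON file.
with open("calibration_result.json") as f:
    calib = json.load(f)

# Single-camera values described below, assumed to be stored as nested lists.
camera_matrix = np.array(calib["camera_matrix"])        # 3x3 intrinsic matrix
distortion = np.array(calib["distortion"])              # lens distortion coefficients
rotation_vecs = [np.array(r) for r in calib["rotation_vecs"]]        # one per calibration image
translation_vecs = [np.array(t) for t in calib["translation_vecs"]]  # one per calibration image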


Single Camera Output Parameters

1. Camera Matrix (camera_matrix)

This matrix contains the intrinsic parameters of the camera, which are essential for projecting 3D points onto 2D image points. It includes the focal lengths and the coordinates of the principal point (the optical center of the image).
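
A minimal sketch of the conventional pinhole layout, assuming the OpenCV-style 3x3 matrix built from fx, fy, cx and cy (all numeric values below are made-up examples):

import numpy as np

fx, fy = 1200.0, 1200.0   # example focal lengths in pixels
cx, cy = 960.0, 540.0     # example principal point in pixels

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Project a 3D point given in camera coordinates (X, Y, Z) to pixel coordinates.
point_3d = np.array([0.1, -0.05, 2.0])
u, v, w = K @ point_3d
pixel = (u / w, v / w)
print(pixel)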

2. Focal Length X (fx)

This is the focal length of the camera along the x-axis (horizontal direction). It represents how strongly the camera converges or diverges light in the horizontal plane.

3. Focal Length Y (fy)

This is the focal length of the camera along the y-axis (vertical direction). It represents how strongly the camera converges or diverges light in the vertical plane.

4. Principal Point X (cx)

This is the x-coordinate of the principal point, which is usually at the center of the image. It represents the horizontal offset of the optical center from the top-left corner of the image.

5. Principal Point Y (cy)

This is the y-coordinate of the principal point, which is usually at the center of the image. It represents the vertical offset of the optical center from the top-left corner of the image.

6. Distortion Coefficients (distortion)

These coefficients are used to correct lens distortions in the captured images. Lenses can introduce barrel or pincushion distortions, making straight lines appear curved. The distortion coefficients help adjust the image to remove these distortions.
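 
As a sketch, these coefficients are typically applied together with the camera matrix, for example with OpenCV's cv2.undistort. The JSON file name and image paths below are placeholders.

import json
import cv2
import numpy as np

with open("calibration_result.json") as f:           # hypothetical file name
    calib = json.load(f)

camera_matrix = np.array(calib["camera_matrix"])
distortion = np.array(calib["distortion"])

img = cv2.imread("capture.png")                      # placeholder image path
undistorted = cv2.undistort(img, camera_matrix, distortion)
cv2.imwrite("capture_undistorted.png", undistorted)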

7. Rotation Vectors (rotation_vecs)

These vectors describe the rotation of the camera in 3D space for each calibration image. Each rotation vector indicates how the camera was oriented when a particular image was taken.

8. Translation Vectors (translation_vecs)

These vectors describe the translation (position) of the camera in 3D space for each calibration image. Each translation vector indicates how the camera was positioned relative to the origin of the world coordinate system when a particular image was taken.
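
A short sketch of turning one rotation/translation pair into a usable pose with OpenCV's cv2.Rodrigues, assuming the usual convention that the pair maps calibration-pattern coordinates into camera coordinates (the file name is a placeholder):

import json
import cv2
import numpy as np

with open("calibration_result.json") as f:            # hypothetical file name
    calib = json.load(f)

# Pose recovered for the first calibration image.
rvec = np.asarray(calib["rotation_vecs"][0], dtype=float)
tvec = np.asarray(calib["translation_vecs"][0], dtype=float).reshape(3, 1)

R, _ = cv2.Rodrigues(rvec)        # 3x3 rotation matrix from the axis-angle vector

# A point X_w on the calibration pattern maps into camera coordinates as X_c = R @ X_w + t.
X_w = np.zeros((3, 1))            # pattern origin
X_c = R @ X_w + tvec
print(X_c.ravel())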


Stereo Camera Output Parameters

Left Camera Matrix (left_camera_matrix)

This is a matrix that contains the intrinsic parameters of the left camera. It describes the focal length and the optical center (principal point) of the left camera. In simple terms, it helps the camera understand how to convert the 3D world into a 2D image.

Left Distortion Coefficients (left_distortion)

These coefficients correct distortions in the left camera lens. Lenses can create barrel or pincushion distortions, making straight lines appear curved. These coefficients adjust the image to look more like what you see with the naked eye.

Right Camera Matrix (right_camera_matrix)

Similar to the left camera matrix, this matrix contains the intrinsic parameters for the right camera. It helps the right camera convert the 3D world into a 2D image correctly.

Right Distortion Coefficients (right_distortion)

These coefficients correct distortions in the right camera lens, ensuring the captured images are free from lens-induced distortions.

Rotation Matrix (rotation_matrix)

This matrix describes how to rotate the left camera’s coordinate system to align it with the right camera’s coordinate system. It essentially tells how the cameras are oriented relative to each other.

Translation Vector (translation_vector)

This vector describes the distance and direction from the left camera to the right camera. It tells you how far apart the cameras are and in which direction one is relative to the other.
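
A sketch of how these two extrinsic parameters are typically used, assuming the OpenCV stereoCalibrate convention in which a point in the left camera's coordinate system maps to the right camera's system as X_right = R · X_left + T (the file name is a placeholder):

import json
import numpy as np

with open("calibration_result.json") as f:             # hypothetical file name
    calib = json.load(f)

R = np.asarray(calib["rotation_matrix"], dtype=float)       # 3x3 rotation
T = np.asarray(calib["translation_vector"], dtype=float).reshape(3, 1)

# A point one metre straight ahead of the left camera, expressed in the right camera's frame.
X_left = np.array([[0.0], [0.0], [1.0]])
X_right = R @ X_left + T

# The baseline (distance between the two camera centers) is the length of T.
baseline = np.linalg.norm(T)
print(X_right.ravel(), baseline)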

Essential Matrix (essential_matrix)

The essential matrix encodes the rotation and translation between the two cameras, expressed in normalized image coordinates (that is, with the intrinsic parameters factored out). It is used in computing the relative pose of the cameras in 3D space.

Fundamental Matrix (fundamental_matrix)

This matrix is used to map points in the left image to lines in the right image. It helps find corresponding points between the two images taken by the stereo cameras.
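
As a sketch, OpenCV's cv2.computeCorrespondEpilines applies the fundamental matrix to map a pixel in the left image to its epipolar line in the right image. The test pixel below is a made-up example and the file name is a placeholder.

import json
import cv2
import numpy as np

with open("calibration_result.json") as f:              # hypothetical file name
    calib = json.load(f)

F = np.asarray(calib["fundamental_matrix"], dtype=float)

# A pixel in the left image (example coordinates).
left_point = np.array([[[640.0, 360.0]]], dtype=np.float32)

# whichImage=1 means the points come from the first (left) image,
# so the returned lines live in the right image: a*x + b*y + c = 0.
lines = cv2.computeCorrespondEpilines(left_point, 1, F)
a, b, c = lines[0, 0]
print(a, b, c)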

Rectification Matrix Left (rectification_matrix_left)

This matrix is used to adjust the left image so that both images (left and right) are aligned as if the cameras were perfectly parallel. This makes it easier to compare the two images for depth calculation.

Rectification Matrix Right (rectification_matrix_right)

Similar to the left rectification matrix, this matrix adjusts the right image for alignment with the left image.

Projection Matrix Left (projection_matrix_left)

This matrix transforms 3D points into the 2D image plane of the left camera after rectification. It combines the camera matrix and rectification adjustments.

Projection Matrix Right (projection_matrix_right)

Similar to the left projection matrix, this transforms 3D points into the 2D image plane of the right camera after rectification.
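
A sketch of how the rectification and projection matrices are typically used, building per-pixel remapping tables with OpenCV's cv2.initUndistortRectifyMap and applying them with cv2.remap. The image size and file paths are placeholders, and the same steps apply to the right camera using the right-hand matrices.

import json
import cv2
import numpy as np

with open("calibration_result.json") as f:               # hypothetical file name
    calib = json.load(f)

K_l = np.asarray(calib["left_camera_matrix"], dtype=float)
d_l = np.asarray(calib["left_distortion"], dtype=float)
R_l = np.asarray(calib["rectification_matrix_left"], dtype=float)
P_l = np.asarray(calib["projection_matrix_left"], dtype=float)

image_size = (1920, 1080)                                 # placeholder (width, height)

# Lookup maps that undistort and rectify the left image in one step.
map1, map2 = cv2.initUndistortRectifyMap(K_l, d_l, R_l, P_l, image_size, cv2.CV_32FC1)

left = cv2.imread("left.png")                             # placeholder image path
left_rectified = cv2.remap(left, map1, map2, cv2.INTER_LINEAR)
cv2.imwrite("left_rectified.png", left_rectified)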

Disparity-to-Depth Mapping (disparity-to-depth_mapping)

This matrix (often denoted as Q) is used to convert disparity (the difference in coordinates of a point in the left and right images) into actual depth information. It helps in creating a 3D map from the stereo images.
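
A sketch of converting a disparity map into 3D points with OpenCV's cv2.reprojectImageTo3D. In practice the disparity would come from a stereo matcher such as cv2.StereoSGBM_create run on the rectified image pair; a dummy array stands in for it here, and the file name is a placeholder.

import json
import cv2
import numpy as np

with open("calibration_result.json") as f:                # hypothetical file name
    calib = json.load(f)

Q = np.asarray(calib["disparity-to-depth_mapping"], dtype=float)

# Dummy disparity map standing in for the output of a real stereo matcher.
disparity = np.random.rand(1080, 1920).astype(np.float32) * 64.0

# Each pixel's (x, y, disparity) is reprojected to an (X, Y, Z) point in 3D space.
points_3d = cv2.reprojectImageTo3D(disparity, Q)
print(points_3d.shape)   # (1080, 1920, 3)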
