Calibrating the Intel RealSense D435 camera using OpenCV and ROS

Hello!


I want to share my experience with the Intel RealSense camera, model D435. As is well known, many computer vision algorithms require prior calibration of the camera. On our project we use ROS to assemble the individual components of an automated intelligent system. However, after searching the Russian-language Internet, I did not find any sensible tutorials on this topic. This publication is intended to fill that gap.


Required software


Since ROS runs on Unix systems, I will assume that we have an Ubuntu 16.04 system available. I will not describe the installation in detail; I will only refer you to the corresponding tutorials.



sudo apt-get install python-opencv 


Installing RealSense drivers


  1. First of all, you need to install the drivers for the camera (the librealsense library).
  2. The ROS package for the camera is realsense2_camera. At the time of publication, the latest version was 2.0.3. To install the package, download the source code and unpack it into your ROS workspace. Then build and install it:

 catkin_make clean
 catkin_make -DCATKIN_ENABLE_TESTING=False -DCMAKE_BUILD_TYPE=Release
 catkin_make install
 echo "source path_to_workspace/devel/setup.bash" >> ~/.bashrc
 source ~/.bashrc

Testing the camera


After installing the drivers, we need to make sure they work as they should. To do this, connect the camera via USB and run the demo:


 roslaunch realsense2_camera demo_pointcloud.launch 

This command opens the ROS visualization tool rviz, where you can see the point cloud published on the topic /camera/depth/color/points :


rviz


Camera calibration


Below is an adapted version of the camera calibration tutorial from the OpenCV documentation.


 import numpy as np
 import cv2
 import glob

 # Termination criteria for the corner refinement
 criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

 # Prepare object points for an 8x6 chessboard
 objp = np.zeros((6*8, 3), np.float32)
 objp[:, :2] = np.mgrid[0:8, 0:6].T.reshape(-1, 2)

 # Arrays to store object points and image points from all images
 objpoints = []  # 3d points in real world space
 imgpoints = []  # 2d points in the image plane

 images = glob.glob('/path_to_images/*.png')

 for fname in images:
     img = cv2.imread(fname)
     gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

     # Find the chessboard corners
     ret, corners = cv2.findChessboardCorners(gray, (8, 6), None)

     # If the corners are found, refine them and store the points
     if ret == True:
         objpoints.append(objp)
         corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
         imgpoints.append(corners2)

         # Draw and display the detected corners
         img = cv2.drawChessboardCorners(img, (8, 6), corners2, ret)
         cv2.imshow('img', img)
         cv2.waitKey(500)

 cv2.destroyAllWindows()

 # Perform the calibration and save the results
 ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
 np.save('/path_to_images/calibration', [mtx, dist, rvecs, tvecs])

For this script to work, we need at least 10 images of a chessboard captured with our camera. To obtain them, we can use, for example, the image_view ROS package or any other program that can take snapshots from a USB camera. The captured images should be placed in a single folder. Sample image:


Chessboard


After the script runs, the calibration results are saved to the file calibration.npy . This data can then be loaded with the following script:


 import numpy as np

 # allow_pickle=True is required on recent NumPy versions, since the saved
 # list of arrays is stored as an object array.
 calibration_data = np.load('path_to_images/calibration.npy', allow_pickle=True)
 mtx = calibration_data[0]
 dist = calibration_data[1]
 rvecs = calibration_data[2]
 tvecs = calibration_data[3]

Conclusion


We have successfully calibrated the RealSense D435 camera using OpenCV and ROS. The calibration results can be used in applications such as object tracking, ArUco marker detection, augmented-reality algorithms and many others. In the next article, I would like to cover tracking of ArUco markers in more detail.

Source: https://habr.com/ru/post/412953/
