How to correctly calibrate a camera with a wide-angle lens using OpenCV?

I am trying to calibrate a camera with a fisheye lens. I therefore used the fisheye lens module, but I keep getting strange results no matter which distortion parameters I fix. This is the input image I use: https://i.imgur.com/apBuAwF.png

where the red circles indicate the corners I use to calibrate my camera.

This is the best output I could get: https://imgur.com/a/XeXk5

I currently don't know by heart what the camera sensor dimensions are, but based on the focal length in pixels that is being calculated in my intrinsic matrix, I deduce my sensor size is approximately 3.3 mm (assuming my physical focal length is 1.8 mm), which seems realistic to me. Yet, when undistorting my input image I get nonsense. Could someone tell me what I may be doing incorrectly?

The matrices and RMS reprojection error output by the calibration:

K:[263.7291703200009, 0, 395.1618975493187;
 0, 144.3800397321767, 188.9308218101271;
 0, 0, 1]

D:[0, 0, 0, 0]

rms: 9.27628
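The sensor-size deduction above can be sanity-checked with a quick back-of-the-envelope calculation, using fx from the K matrix and the assumed 1.8 mm physical focal length (the 480-pixel sensor width used below is only an illustrative assumption):

```python
# Sanity check of the sensor-size deduction: pixel pitch = f_mm / f_px.
f_px = 263.7291703200009  # fx from the K matrix above (pixels)
f_mm = 1.8                # assumed physical focal length (mm)

pixel_pitch_mm = f_mm / f_px  # size of one pixel on the sensor (mm)
print(pixel_pitch_mm)         # roughly 0.0068 mm per pixel

# For a hypothetical 480-pixel-wide sensor this would give:
sensor_width_mm = pixel_pitch_mm * 480
print(sensor_width_mm)        # roughly 3.3 mm, matching the deduction above
```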

My code:

#include <opencv2/opencv.hpp>
#include "opencv2/core.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/ccalib/omnidir.hpp"

using namespace std;
using namespace cv;

vector<vector<Point2d> > points2D;
vector<vector<Point3d> > objectPoints;

Mat src;

//so that I don't have to select them manually every time
void initializePoints2D()
{
    points2D[0].push_back(Point2d(234, 128));
    points2D[0].push_back(Point2d(300, 124));
    points2D[0].push_back(Point2d(381, 126));
    points2D[0].push_back(Point2d(460, 127));
    points2D[0].push_back(Point2d(529, 137));
    points2D[0].push_back(Point2d(207, 147));
    points2D[0].push_back(Point2d(280, 147));
    points2D[0].push_back(Point2d(379, 146));
    points2D[0].push_back(Point2d(478, 153));
    points2D[0].push_back(Point2d(551, 165));
    points2D[0].push_back(Point2d(175, 180));
    points2D[0].push_back(Point2d(254, 182));
    points2D[0].push_back(Point2d(377, 185));
    points2D[0].push_back(Point2d(502, 191));
    points2D[0].push_back(Point2d(586, 191));
    points2D[0].push_back(Point2d(136, 223));
    points2D[0].push_back(Point2d(216, 239));
    points2D[0].push_back(Point2d(373, 253));
    points2D[0].push_back(Point2d(534, 248));
    points2D[0].push_back(Point2d(624, 239));
    points2D[0].push_back(Point2d(97, 281));
    points2D[0].push_back(Point2d(175, 322));
    points2D[0].push_back(Point2d(370, 371));
    points2D[0].push_back(Point2d(578, 339));
    points2D[0].push_back(Point2d(662, 298));


    for (size_t j = 0; j < points2D[0].size(); j++)
    {
        circle(src, points2D[0].at(j), 5, Scalar(0, 0, 255), 1, 8, 0);
    }

    imshow("src with circles", src);
    waitKey(0);
}

int main(int argc, char** argv)
{
    Mat srcSaved;

    src = imread("images/frontCar.png");
    resize(src, src, Size(), 0.5, 0.5);
    src.copyTo(srcSaved);

    vector<Point3d> objectPointsRow;
    vector<Point2d> points2DRow;
    objectPoints.push_back(objectPointsRow);
    points2D.push_back(points2DRow);

    for(int i=0; i<5;i++)
    {

        for(int j=0; j<5;j++)
        {
            objectPoints[0].push_back(Point3d(5*j,5*i,1));        
        }
    }

    initializePoints2D();
    cv::Matx33d K;
    cv::Vec4d D;
    std::vector<cv::Vec3d> rvec;
    std::vector<cv::Vec3d> tvec;


    int flag = 0;
    flag |= cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC;
    flag |= cv::fisheye::CALIB_CHECK_COND;
    flag |= cv::fisheye::CALIB_FIX_SKEW; 
    flag |= cv::fisheye::CALIB_FIX_K1; 
    flag |= cv::fisheye::CALIB_FIX_K2; 
    flag |= cv::fisheye::CALIB_FIX_K3; 
    flag |= cv::fisheye::CALIB_FIX_K4; 


    double rms = cv::fisheye::calibrate(
        objectPoints, points2D, src.size(),
        K, D, rvec, tvec, flag, cv::TermCriteria(3, 20, 1e-6)
    );

    Mat output;
    cerr<<"K:"<<K<<endl;
    cerr<<"D:"<<D<<endl;
    cv::fisheye::undistortImage(srcSaved, output, K, D);
    cerr<<"rms: "<<rms<<endl;
    imshow("output", output);
    waitKey(0);

    cerr<<"image size: "<<srcSaved.size()<<endl;

}
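For reference, the CALIB_FIX_K1 through CALIB_FIX_K4 flags above pin all four distortion coefficients to their initial value of zero, so the model being fitted reduces to the pure equidistant fisheye projection. A small sketch of OpenCV's fisheye distortion model (the angle used is illustrative):

```python
import math

def fisheye_distort(theta, k1=0.0, k2=0.0, k3=0.0, k4=0.0):
    """OpenCV fisheye model: theta_d = theta*(1 + k1*t^2 + k2*t^4 + k3*t^6 + k4*t^8)."""
    t2 = theta * theta
    return theta * (1 + k1*t2 + k2*t2**2 + k3*t2**3 + k4*t2**4)

theta = math.radians(40)                  # illustrative incidence angle
print(fisheye_distort(theta))             # all k fixed to 0: theta_d == theta
print(fisheye_distort(theta, k1=-0.05))   # a nonzero k1 bends the ray inward
```

With all k fixed, the only free parameters are the intrinsics, so the optimizer has no way to absorb any residual lens distortion.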

If anybody has an idea, feel free to share some code, either in Python or in C++. Whatever floats your boat.

As you may have noticed, I don't use a black-and-white checkerboard for the calibration, but corners from the tiles constituting my carpet. At the end of the day the goal - I think - is to get corner coordinates that represent samples along the distortion radii. The carpet is to some extent the same as a checkerboard; the only difference - once again, I think - is that the corners on the carpet have fewer high-frequency edges than those on a black-and-white checkerboard.

I know the number of pictures is very limited, i.e. only 1. I expect the image to be undistorted only to some extent, but I also expect the undistortion to be done reasonably well. In this case, however, the output image looks like total nonsense.

I ended up using this image with a chessboard: https://imgur.com/a/WlLBR provided by this website: https://sites.google.com/site/scarabotix/ocamcalib-toolbox/ocamcalib-toolbox-download-page But the results are still very poor: diagonal lines like in the other output image I posted.

Thanks

Answer

Your first problem is that you are only using one image. Even if you had an ideal pinhole camera with no distortion, you would not be able to estimate the intrinsics from a single image of co-planar points. One image of co-planar points simply does not give you enough constraints to solve for the intrinsics.

You need at least two images at different 3D orientations, or a 3D calibration rig, where the points are not co-planar. Of course, in practice you need at least 20 images for accurate calibration.
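A rough degrees-of-freedom count (following Zhang's planar-calibration argument; this sketch covers only the pinhole intrinsics, before any distortion coefficients) shows why one planar view is not enough: each view of a plane yields a homography that contributes 2 constraints on the intrinsics, while a zero-skew pinhole K already has 4 unknowns (fx, fy, cx, cy):

```python
# Degrees-of-freedom sketch for planar (Zhang-style) calibration.
# Each planar view contributes 2 constraints on the intrinsics via its
# homography; a zero-skew pinhole K has 4 unknowns (fx, fy, cx, cy).
def views_needed(intrinsic_unknowns, constraints_per_view=2):
    views = 0
    while views * constraints_per_view < intrinsic_unknowns:
        views += 1
    return views

print(views_needed(4))  # zero-skew pinhole: at least 2 views
print(views_needed(5))  # with skew as an unknown: at least 3 views
```

These are bare minimums for a solvable system; in practice far more views, at varied orientations, are needed for a stable estimate, as noted above.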

Your second problem is that you are using a carpet as the checkerboard. You need to be able to detect the points in the image with sub-pixel accuracy. Small localization errors result in large errors in the estimated camera parameters. I seriously doubt that you can detect the corners of the squares of your carpet with any reasonable accuracy. In fact, you cannot even measure the actual point locations on the carpet very accurately, because it is fuzzy.
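To see how localization errors propagate, here is a toy illustration with simple pinhole geometry and made-up numbers (not the fisheye model, and not the poster's setup): estimating focal length from a single known point shows the error in the corner position passing straight into the estimate.

```python
# Toy illustration: pinhole projection x_px = f_px * X / Z, so a focal
# length recovered from one known point is f_px = x_px * Z / X.
def focal_from_point(x_px, X_m, Z_m):
    return x_px * Z_m / X_m

true_f = focal_from_point(100.0, 0.5, 1.0)    # 200 px with a perfect corner
noisy_f = focal_from_point(102.0, 0.5, 1.0)   # a 2 px corner error -> 204 px
print(true_f, noisy_f)  # the 2% pixel error becomes a 2% focal error
```

Fuzzy carpet corners can easily be off by several pixels, whereas checkerboard corners are routinely refined to sub-pixel accuracy, which is why a high-contrast printed target matters.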

Good luck!
