I have done disparity processing on images using OpenCV with a calibrated stereo camera; the results look good and the calibration data is good.
I need to compute the real-world distance at a clicked pixel.
From other Stack Overflow questions, the method I have seen is:

depth = baseline * focal / disparity

using the following function:
```cpp
setMouseCallback("disparity", onMouse, &disp);

static void onMouse(int event, int x, int y, int flags, void* param)
{
    cv::Mat &xyz = *((cv::Mat*)param); // cast and deref the param
    if (event == cv::EVENT_LBUTTONDOWN)
    {
        unsigned int val = xyz.at<uchar>(y, x);
        double depth = (camera_matrixL.at<float>(0, 0) * T.at<float>(0, 0)) / val;
        cout << "x= " << x << " y= " << y << " val= " << val << " distance: " << depth << endl;
    }
}
```
I clicked on a point that is 3 meters away from the stereo camera. The result I got was:

val= 31 distance: 0.590693
The values in the disparity matrix lie between 0 and 255, and the matrix's type is 0, i.e. CV_8UC1.
The stereo baseline is 0.0643654 (in meters), and the focal length is 284.493.
I have also tried the following (from "OpenCV - compute real distance from disparity map"):

```cpp
float fMaxDistance = static_cast<float>((1. / T.at<float>(0, 0)) * camera_matrixL.at<float>(0, 0));
// outputDisparityValue is a single 16-bit value from the disparity map
float fDisparity = val / (float)cv::StereoMatcher::DISP_SCALE;
float fDistance = fMaxDistance / fDisparity;
```
If we assume the result is in millimeters, this gives a value closer to the truth: val= 31 distance: 2281.27, but it is still not correct.
Which of these methods is correct, and where am I going wrong?
EDIT: Following the answer, I am trying this:
```cpp
std::vector<Eigen::Vector4d> pointcloud;
float fx = 284.492615;
float fy = 285.683197;
float cx = 424; // 425.807709;
float cy = 400; // 395.494293;
cv::Mat Q = cv::Mat(4, 4, CV_32F);
Q.at<float>(0, 0) = 1.0;
Q.at<float>(0, 1) = 0.0;
Q.at<float>(0, 2) = 0.0;
Q.at<float>(0, 3) = -cx; // cx
Q.at<float>(1, 0) = 0.0;
Q.at<float>(1, 1) = 1.0;
Q.at<float>(1, 2) = 0.0;
Q.at<float>(1, 3) = -cy; // cy
Q.at<float>(2, 0) = 0.0;
Q.at<float>(2, 1) = 0.0;
Q.at<float>(2, 2) = 0.0;
Q.at<float>(2, 3) = -fx; // focal
Q.at<float>(3, 0) = 0.0;
Q.at<float>(3, 1) = 0.0;
Q.at<float>(3, 2) = -1.0 / 6; // 1.0/BaseLine
Q.at<float>(3, 3) = 0.0; // cx - cx'

cv::Mat XYZcv(depth_image.size(), CV_32FC3);
reprojectImageTo3D(depth_image, XYZcv, Q, false, CV_32F);

for (int y = 0; y < XYZcv.rows; y++)
{
    for (int x = 0; x < XYZcv.cols; x++)
    {
        cv::Point3f pointOcv = XYZcv.at<cv::Point3f>(y, x);
        Eigen::Vector4d pointEigen(0, 0, 0, left.at<uchar>(y, x) / 255.0);
        pointEigen[0] = pointOcv.x;
        pointEigen[1] = pointOcv.y;
        pointEigen[2] = pointOcv.z;
        pointcloud.push_back(pointEigen);
    }
}
```
That gives me a point cloud. I am using StereoSGBM and DisparityWLSFilter. Because I create the matrix manually, I was not using reprojectImageTo3D before and had no Q matrix. The depth-map code is heavily modified from https://github.com/k22jung/stereo_vision/blob/master/src/stereo_vision.cpp. I will add some pictures. - anti