OpenCV: Assertion failed (src.checkVector(2, CV_32F)


I am currently trying to correct the perspective of an image inside a UIImage extension.

When calling getPerspectiveTransform, I get the following assertion error.

Error

OpenCV Error: Assertion failed (src.checkVector(2, CV_32F) == 4 && dst.checkVector(2, CV_32F) == 4) in getPerspectiveTransform, file /Volumes/build-storage/build/master_iOS-mac/opencv/modules/imgproc/src/imgwarp.cpp, line 6748
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Volumes/build-storage/build/master_iOS-mac/opencv/modules/imgproc/src/imgwarp.cpp:6748: error: (-215) src.checkVector(2, CV_32F) == 4 && dst.checkVector(2, CV_32F) == 4 in function getPerspectiveTransform

Code

- (UIImage *)performPerspectiveCorrection {
    Mat src = [self genereateCVMat];
    Mat thr;
    cv::cvtColor(src, thr, CV_BGR2GRAY);

    cv::threshold(thr, thr, 70, 255, CV_THRESH_BINARY);

    std::vector< std::vector <cv::Point> > contours; // Vector for storing contour
    std::vector< cv::Vec4i > hierarchy;
    int largest_contour_index=0;
    int largest_area=0;

    cv::Mat dst(src.rows,src.cols, CV_8UC1, cv::Scalar::all(0)); //create destination image

    cv::findContours(thr.clone(), contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0)); // Find the contours in the image

    for (int i = 0; i< contours.size(); i++) {
        double a = cv::contourArea(contours[i], false); //  Find the area of contour
        if (a > largest_area){
            largest_area=a;
            largest_contour_index=i; //Store the index of largest contour
        }
    }

    cv::drawContours( dst,contours, largest_contour_index, cvScalar(255,255,255),CV_FILLED, 8, hierarchy );

    std::vector<std::vector<cv::Point> > contours_poly(1);
    approxPolyDP( cv::Mat(contours[largest_contour_index]), contours_poly[0],5, true );
    cv::Rect boundRect = cv::boundingRect(contours[largest_contour_index]);

    if(contours_poly[0].size() >= 4){
        std::vector<cv::Point> quad_pts;
        std::vector<cv::Point> squre_pts;

        quad_pts.push_back(cv::Point(contours_poly[0][0].x,contours_poly[0][0].y));
        quad_pts.push_back(cv::Point(contours_poly[0][1].x,contours_poly[0][1].y));
        quad_pts.push_back(cv::Point(contours_poly[0][3].x,contours_poly[0][3].y));
        quad_pts.push_back(cv::Point(contours_poly[0][2].x,contours_poly[0][2].y));

        squre_pts.push_back(cv::Point(boundRect.x,boundRect.y));
        squre_pts.push_back(cv::Point(boundRect.x,boundRect.y+boundRect.height));
        squre_pts.push_back(cv::Point(boundRect.x+boundRect.width,boundRect.y));
        squre_pts.push_back(cv::Point(boundRect.x+boundRect.width,boundRect.y+boundRect.height));

        Mat transmtx = getPerspectiveTransform(quad_pts, squre_pts);
        Mat transformed = Mat::zeros(src.rows, src.cols, CV_8UC3);
        warpPerspective(src, transformed, transmtx, src.size());

        return [UIImage imageByCVMat:transformed];
    }
    else {
        NSLog(@"Make sure that your are getting 4 corner using approxPolyDP...");
        return self;
    }
}
1 Answer

I know this is late, but I ran into the same problem, so maybe this will help someone else.
The error occurs because in getPerspectiveTransform(src, dst), both src and dst must be of type vector<Point2f>, not vector<Point>.
So it should look like this:
std::vector<cv::Point2f> quad_pts;
std::vector<cv::Point2f> squre_pts;

quad_pts.push_back(cv::Point2f(contours_poly[0][0].x,contours_poly[0][0].y));
// etc.

squre_pts.push_back(cv::Point2f(boundRect.x,boundRect.y));
// etc.
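Putting this together with the point list from the question, a corrected version of that block could look like the sketch below (same contours_poly, boundRect, quad_pts and squre_pts names as in the question; only the point type changes):

std::vector<cv::Point2f> quad_pts;   // source quadrilateral, 32-bit float points
std::vector<cv::Point2f> squre_pts;  // destination rectangle, 32-bit float points

quad_pts.push_back(cv::Point2f(contours_poly[0][0].x, contours_poly[0][0].y));
quad_pts.push_back(cv::Point2f(contours_poly[0][1].x, contours_poly[0][1].y));
quad_pts.push_back(cv::Point2f(contours_poly[0][3].x, contours_poly[0][3].y));
quad_pts.push_back(cv::Point2f(contours_poly[0][2].x, contours_poly[0][2].y));

squre_pts.push_back(cv::Point2f(boundRect.x, boundRect.y));
squre_pts.push_back(cv::Point2f(boundRect.x, boundRect.y + boundRect.height));
squre_pts.push_back(cv::Point2f(boundRect.x + boundRect.width, boundRect.y));
squre_pts.push_back(cv::Point2f(boundRect.x + boundRect.width, boundRect.y + boundRect.height));

// Both vectors now hold exactly 4 CV_32F points, so the assertion
// src.checkVector(2, CV_32F) == 4 && dst.checkVector(2, CV_32F) == 4 passes.
Mat transmtx = getPerspectiveTransform(quad_pts, squre_pts);

This satisfies the assertion in the error message, which requires each point set to contain exactly 4 points stored as 32-bit floats (CV_32F).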

Thanks, that wasn't very clear. - maxgalbu
