OpenCV - feature matching + homography gives incorrect results


I keep running into a problem where the detected object's outline is not drawn in the correct position, as if the coordinates are wrong. My Hessian threshold is set to 2000, and I filter out matches whose distance is more than 3 times the minimum. Any help is appreciated.

Result of running the matching and homography:

Here is the code sample:

public static void findMatches()
{
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

    //Load Image 1 
    Mat img_object = Highgui.imread("./resources/Database/box.png"); 
    //Load Image 2
    Mat img_scene = Highgui.imread("./resources/Database/box_in_scene.png");

    //Exit if either image failed to load
    //(in the Java API, imread returns an empty Mat rather than null)
    if (img_object.empty() || img_scene.empty())
        {
            System.exit(0);
        }

    //Convert image 1 to greyscale
    //(imread loads 3-channel BGR by default, so use COLOR_BGR2GRAY;
    //COLOR_BGRA2GRAY expects a 4-channel image and will fail here)
    Mat grayImageobject = new Mat();
    Imgproc.cvtColor(img_object, grayImageobject, Imgproc.COLOR_BGR2GRAY);
    Core.normalize(grayImageobject, grayImageobject, 0, 255, Core.NORM_MINMAX);

    //Convert image 2 to greyscale
    Mat grayImageScene = new Mat();
    Imgproc.cvtColor(img_scene, grayImageScene, Imgproc.COLOR_BGR2GRAY);
    Core.normalize(grayImageScene, grayImageScene, 0, 255, Core.NORM_MINMAX);

    //Create a SURF feature detector
    FeatureDetector detector = FeatureDetector.create(FeatureDetector.SURF);

    //Cannot pass the hessian value directly, so write the desired settings
    //to a YAML file and load them via detector.read
    //(note: YAML requires a space after each colon, e.g. "octaves: 3")
    try (Writer writer = new BufferedWriter(new OutputStreamWriter(new FileOutputStream("hessian.txt"), "utf-8"))) {
        writer.write("%YAML:1.0\nhessianThreshold: 2000.\noctaves: 3\noctaveLayers: 4\nupright: 0\n");
    } catch (IOException e) {
        e.printStackTrace();
    }

    detector.read("hessian.txt");

    //Mat of keypoints for object and scene
    MatOfKeyPoint keypoints_object = new MatOfKeyPoint();
    MatOfKeyPoint keypoints_scene  = new MatOfKeyPoint();

    //Detect keypoints in the greyscale object and scene images
    //(the greyscale mats prepared above, not the original colour images)
    detector.detect(grayImageobject, keypoints_object);
    detector.detect(grayImageScene, keypoints_scene);

    DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.SURF);

    Mat descriptor_object = new Mat();
    Mat descriptor_scene = new Mat() ;

    extractor.compute(grayImageobject, keypoints_object, descriptor_object);
    extractor.compute(grayImageScene, keypoints_scene, descriptor_scene);

    DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.FLANNBASED);
    MatOfDMatch matches = new MatOfDMatch();

    matcher.match(descriptor_object, descriptor_scene, matches);
    List<DMatch> matchesList = matches.toList();

    double max_dist = 0.0;
    double min_dist = 100.0;

    //Find the minimum and maximum descriptor distances over all matches
    for(int i = 0; i < descriptor_object.rows(); i++){
        double dist = matchesList.get(i).distance;
        if(dist < min_dist) min_dist = dist;
        if(dist > max_dist) max_dist = dist;
    }

    System.out.println("-- Max dist : " + max_dist);
    System.out.println("-- Min dist : " + min_dist);    

    LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
    MatOfDMatch gm = new MatOfDMatch();

    for(int i = 0; i < descriptor_object.rows(); i++){
        if(matchesList.get(i).distance < 3*min_dist){
            good_matches.addLast(matchesList.get(i));
        }
    }

    gm.fromList(good_matches);

    Mat img_matches = new Mat();
    Features2d.drawMatches(img_object,keypoints_object,img_scene,keypoints_scene, gm, img_matches, new Scalar(255,0,0), new Scalar(0,0,255), new MatOfByte(), 2);

    if(good_matches.size() >= 10){

    LinkedList<Point> objList = new LinkedList<Point>();
    LinkedList<Point> sceneList = new LinkedList<Point>();

    List<KeyPoint> keypoints_objectList = keypoints_object.toList();
    List<KeyPoint> keypoints_sceneList = keypoints_scene.toList();

    for(int i = 0; i<good_matches.size(); i++){
        objList.addLast(keypoints_objectList.get(good_matches.get(i).queryIdx).pt);
        sceneList.addLast(keypoints_sceneList.get(good_matches.get(i).trainIdx).pt);
    }

    MatOfPoint2f obj = new MatOfPoint2f();
    obj.fromList(objList);

    MatOfPoint2f scene = new MatOfPoint2f();
    scene.fromList(sceneList);

    Mat homography = Calib3d.findHomography(obj, scene);

    Mat obj_corners = new Mat(4,1,CvType.CV_32FC2);
    Mat scene_corners = new Mat(4,1,CvType.CV_32FC2);

    obj_corners.put(0, 0, new double[] {0,0});
    obj_corners.put(1, 0, new double[] {img_object.cols(),0});
    obj_corners.put(2, 0, new double[] {img_object.cols(),img_object.rows()});
    obj_corners.put(3, 0, new double[] {0,img_object.rows()});

    //Project the object corners into the scene using the homography
    //(findHomography above already estimated the perspective transform
    //from the matched point pairs; here we only apply it)
    Core.perspectiveTransform(obj_corners, scene_corners, homography);

    Core.line(img_matches, new Point(scene_corners.get(0,0)), new Point(scene_corners.get(1,0)), new Scalar(0, 255, 0),4);
    Core.line(img_matches, new Point(scene_corners.get(1,0)), new Point(scene_corners.get(2,0)), new Scalar(0, 255, 0),4);
    Core.line(img_matches, new Point(scene_corners.get(2,0)), new Point(scene_corners.get(3,0)), new Scalar(0, 255, 0),4);
    Core.line(img_matches, new Point(scene_corners.get(3,0)), new Point(scene_corners.get(0,0)), new Scalar(0, 255, 0),4);

    Highgui.imwrite("./resources/ImageMatching.jpg", img_matches);
    createWindow("Image Matching", "resources/ImageMatching.jpg");
    } 
    else 
    {
        System.out.println("Not enough Matches");
        System.exit(0);     
    }
}
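
For context on the drawing step: Core.perspectiveTransform maps each object corner through the 3×3 homography, including the division by the perspective term. A minimal plain-Java sketch of that mapping (the class and method names here are illustrative, not OpenCV API):

```java
public class HomographyDemo {
    // Map one point through a 3x3 homography h: multiply by h, then divide
    // by the resulting w component to come back to 2D coordinates.
    static double[] transformPoint(double[][] h, double x, double y) {
        double w  = h[2][0] * x + h[2][1] * y + h[2][2];       // perspective term
        double tx = (h[0][0] * x + h[0][1] * y + h[0][2]) / w;
        double ty = (h[1][0] * x + h[1][1] * y + h[1][2]) / w;
        return new double[] { tx, ty };
    }

    public static void main(String[] args) {
        // A pure translation homography: shifts every point by (50, 20)
        double[][] h = {
            { 1, 0, 50 },
            { 0, 1, 20 },
            { 0, 0,  1 }
        };
        double[] p = transformPoint(h, 0, 0);
        System.out.println(p[0] + "," + p[1]); // -> 50.0,20.0
    }
}
```

This is exactly what happens to each of the four obj_corners; the resulting scene_corners are in img_scene's coordinate frame, which matters for where the lines end up, as the answer below the code explains.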
1 Answer
The coordinates are already correct; you are just drawing on the wrong image.
Your coordinates are relative to the second image, img_scene. So if you draw the lines on that image alone, they will be in the right place.
If instead you want to draw them on the composite image, where img_scene is shifted right by the width of img_object, you just need to add img_object.cols() to the x coordinate of each point.
For example:
Core.line(img_matches, 
          new Point(scene_corners.get(0,0)[0] + img_object.cols(), scene_corners.get(0,0)[1]), 
          new Point(scene_corners.get(1,0)[0] + img_object.cols(), scene_corners.get(1,0)[1]), 
          new Scalar(0, 255, 0),4);

The same goes for this first line and the three that follow.
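
The shift can be sketched without OpenCV: each projected corner, read out of the Mat as a double[] {x, y}, gets the object image's width added to its x before drawing on the composite (names below are illustrative):

```java
public class OffsetDemo {
    // Shift a projected corner right by the object's width so it lands on
    // the scene half of the side-by-side img_matches composite.
    static double[] shiftForComposite(double[] corner, int objectWidth) {
        return new double[] { corner[0] + objectWidth, corner[1] };
    }

    public static void main(String[] args) {
        double[] corner  = { 120.0, 45.0 };                  // corner in img_scene coordinates
        double[] shifted = shiftForComposite(corner, 300);   // object image is 300 px wide
        System.out.println(shifted[0] + "," + shifted[1]);   // -> 420.0,45.0
    }
}
```

The y coordinate is untouched because drawMatches places the two images side by side, not stacked.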


Thanks. This is the image I get when I draw the outline only on the img_scene variable. Link. You can see it lands on the correct region, but if I use your adjusted code, I get an error on the .x and .y that appear after scene_corners.get. - Juppal
Do you mean like this: Core.line(img_matches, new Point(scene_corners.get(0,0)[0] + img_object.cols(), scene_corners.get(0,0)[0]), new Point(scene_corners.get(1,0)[1] + img_object.cols(), scene_corners.get(1,0)[1]), new Scalar(0, 255, 0),4); - Juppal
Almost; I wasn't paying much attention when I changed it, sorry :D. Check it now: use [0] instead of .x and [1] instead of .y. - Miki
Thanks for your help, Miki, but I seem to be getting a NullPointerException on the fourth line: Core.line(img_matches, new Point(scene_corners.get(3,0)[0] + img_object.cols(), scene_corners.get(3,0)[1]), new Point(scene_corners.get(4,0)[0] + img_object.cols(), scene_corners.get(4,0)[1]), new Scalar(0, 255, 0),4); - Juppal
@Juppal I can't test Java code; that's why I explained what you need to do before "guessing" at how to write it. You only need to add the width of the first image to your points. I don't know exactly how to do that in Java, but the line I gave you should be right (apart from a few typos, as you already found). Can you tell me which line throws the error? - Miki
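
A note on the NullPointerException quoted in the comments above: scene_corners is a 4×1 Mat, so its valid row indices are 0 through 3. scene_corners.get(4,0) is out of range, and in the Java API that call likely returns null, so indexing the result with [0] throws the exception. The fourth edge should close the quadrilateral from corner 3 back to corner 0; the modulo pattern below makes that hard to get wrong (plain-Java sketch, illustrative names, no OpenCV):

```java
public class CornerEdges {
    // The four edges of a quadrilateral connect corner i to corner (i + 1) % 4,
    // so the last edge wraps from corner 3 back to corner 0 -- index 4 is never used.
    static int[][] edgeIndices() {
        int[][] edges = new int[4][2];
        for (int i = 0; i < 4; i++) {
            edges[i][0] = i;
            edges[i][1] = (i + 1) % 4;
        }
        return edges;
    }

    public static void main(String[] args) {
        for (int[] e : edgeIndices()) {
            System.out.println(e[0] + " -> " + e[1]);
        }
        // prints 0 -> 1, 1 -> 2, 2 -> 3, 3 -> 0 (one edge per line)
    }
}
```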
