OpenCV 2.4.1 - Computing SURF descriptors in Python

I am trying to update my code to use cv2.SURF() instead of cv2.FeatureDetector_create("SURF") and cv2.DescriptorExtractor_create("SURF"), but after detecting the keypoints I cannot get the descriptors. What is the correct way to call SURF.detect?
I tried to follow the OpenCV documentation, but I'm a bit confused. The documentation says:
Python: cv2.SURF.detect(img, mask) → keypoints
Python: cv2.SURF.detect(img, mask[, descriptors[, useProvidedKeypoints]]) → keypoints, descriptors

How do I pass the keypoints in when making the second call to SURF.detect?
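
For reference, a minimal sketch of how the second overload is typically called with the 2.4.x Python bindings (the image name 'messi4.jpg' and the hessian threshold of 400 are placeholders, and the final reshape is only needed on builds that return the descriptors as a flat array):

import cv2
import numpy as np

img = cv2.imread('messi4.jpg')                 # placeholder test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

surf = cv2.SURF(400)                           # 400 = example hessian threshold

# second overload: detect() returns descriptors as well as keypoints
kp, descriptors = surf.detect(gray, None, useProvidedKeypoints = False)

# some builds return the descriptors flattened; reshape to one row per keypoint
descriptors = np.asarray(descriptors, dtype = np.float32).reshape(len(kp), -1)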

2 Answers


I'm not sure whether I understood your question correctly, but if you are looking for an example of matching SURF keypoints, here is a very simple and basic one, similar to template matching:

import cv2
import numpy as np

# Load the image
img = cv2.imread('messi4.jpg')

# Convert it to grayscale
imgg = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# SURF extraction
surf = cv2.SURF()
kp, descriptors = surf.detect(imgg, None, useProvidedKeypoints = False)

# Setting up samples and responses for kNN
samples = np.array(descriptors)
responses = np.arange(len(kp), dtype = np.float32)

# kNN training
knn = cv2.KNearest()
knn.train(samples,responses)

# Now loading a template image and searching for similar keypoints
template = cv2.imread('template.jpg')
templateg = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
keys, desc = surf.detect(templateg, None, useProvidedKeypoints = False)

for h,des in enumerate(desc):
    des = np.array(des,np.float32).reshape((1,128))
    retval, results, neigh_resp, dists = knn.find_nearest(des,1)
    res,dist =  int(results[0][0]),dists[0][0]

    if dist < 0.1: # draw matched keypoints in red color
        color = (0,0,255)
    else:  # draw unmatched in blue color
        print(dist)
        color = (255,0,0)

    #Draw matched key points on original image
    x,y = kp[res].pt
    center = (int(x),int(y))
    cv2.circle(img,center,2,color,-1)

    #Draw matched key points on template image
    x,y = keys[h].pt
    center = (int(x),int(y))
    cv2.circle(template,center,2,color,-1)

cv2.imshow('img',img)
cv2.imshow('tm',template)
cv2.waitKey(0)
cv2.destroyAllWindows()

Here is the result I got (the template image was copied and pasted onto the original image using a paint tool):

(result images: matched keypoints drawn on the original image and on the template image)

As you can see, there are a few small mistakes, but for a start, hopefully it is acceptable.


Thanks for the detailed reply! I already had a complete implementation of SURF matching, but it was done with an older version of OpenCV. What I was looking for is exactly this: surf.detect(imgg, None, useProvidedKeypoints = False). Thank you very much, that was a big help. - Kkov
Using my images with your code, I get the following error: OpenCV Error: Sizes of input arguments do not match (Response array must contain as many elements as the total number of samples) in cvPreprocessOrderedResponses. - user601836
I get the same error as above with OpenCV 2.3.1: calling knn.train(samples, responses) raises OpenCV Error: "Sizes of input arguments do not match (Response array must contain as many elements as the total number of samples)". - Moshe
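
The "Response array must contain as many elements as the total number of samples" error reported in these comments typically means the descriptors came back from surf.detect as a single flat array, so np.array(descriptors) is 1-D and its length no longer equals len(kp). The answer below handles this by computing the row size explicitly; a minimal guard along the same lines (reusing kp and descriptors from the snippet above) would be:

import numpy as np

samples = np.asarray(descriptors, dtype = np.float32)
if samples.ndim == 1:
    # flat array: reshape to one descriptor row per keypoint
    samples = samples.reshape(len(kp), -1)
responses = np.arange(len(kp), dtype = np.float32)
# now len(samples) == len(responses), which is what knn.train() expects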


An improvement of the above algorithm is:

import cv2
import numpy

opencv_haystack = cv2.imread('haystack.jpg')
opencv_needle = cv2.imread('needle.jpg')

ngrey = cv2.cvtColor(opencv_needle, cv2.COLOR_BGR2GRAY)
hgrey = cv2.cvtColor(opencv_haystack, cv2.COLOR_BGR2GRAY)

# build feature detector and descriptor extractor
hessian_threshold = 85
detector = cv2.SURF(hessian_threshold)
(hkeypoints, hdescriptors) = detector.detect(hgrey, None, useProvidedKeypoints = False)
(nkeypoints, ndescriptors) = detector.detect(ngrey, None, useProvidedKeypoints = False)

# extract vectors of size 64 from raw descriptors numpy arrays
rowsize = len(hdescriptors) // len(hkeypoints)
if rowsize > 1:
    hrows = numpy.array(hdescriptors, dtype = numpy.float32).reshape((-1, rowsize))
    nrows = numpy.array(ndescriptors, dtype = numpy.float32).reshape((-1, rowsize))
    #print hrows.shape, nrows.shape
else:
    hrows = numpy.array(hdescriptors, dtype = numpy.float32)
    nrows = numpy.array(ndescriptors, dtype = numpy.float32)
    rowsize = len(hrows[0])

# kNN training - learn mapping from hrow to hkeypoints index
samples = hrows
responses = numpy.arange(len(hkeypoints), dtype = numpy.float32)
#print len(samples), len(responses)
knn = cv2.KNearest()
knn.train(samples,responses)

# retrieve index and value through enumeration
for i, descriptor in enumerate(nrows):
    descriptor = numpy.array(descriptor, dtype = numpy.float32).reshape((1, rowsize))
    #print i, descriptor.shape, samples[0].shape
    retval, results, neigh_resp, dists = knn.find_nearest(descriptor, 1)
    res, dist =  int(results[0][0]), dists[0][0]
    #print res, dist

    if dist < 0.1:
        # draw matched keypoints in red color
        color = (0, 0, 255)
    else:
        # draw unmatched in blue color
        color = (255, 0, 0)
    # draw matched key points on haystack image
    x,y = hkeypoints[res].pt
    center = (int(x),int(y))
    cv2.circle(opencv_haystack,center,2,color,-1)
    # draw matched key points on needle image
    x,y = nkeypoints[i].pt
    center = (int(x),int(y))
    cv2.circle(opencv_needle,center,2,color,-1)

cv2.imshow('haystack',opencv_haystack)
cv2.imshow('needle',opencv_needle)
cv2.waitKey(0)
cv2.destroyAllWindows()

You can uncomment the print statements to get a better idea about the data structures used.
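
As a side note, if your build exposes cv2.BFMatcher (it is present in the 2.4 series), the kNN bookkeeping can be replaced with a brute-force descriptor matcher. This is a different technique from the cv2.KNearest approach used above, sketched here on the assumption that hrows, nrows and the keypoint lists from the snippet above are already available:

# brute-force matching as an alternative to cv2.KNearest
bf = cv2.BFMatcher(cv2.NORM_L2)
matches = bf.match(nrows, hrows)               # query = needle, train = haystack

# draw the 20 closest matches in red on both images
for m in sorted(matches, key = lambda m: m.distance)[:20]:
    hx, hy = hkeypoints[m.trainIdx].pt
    cv2.circle(opencv_haystack, (int(hx), int(hy)), 2, (0, 0, 255), -1)
    nx, ny = nkeypoints[m.queryIdx].pt
    cv2.circle(opencv_needle, (int(nx), int(ny)), 2, (0, 0, 255), -1)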


Your code looks very interesting, but it doesn't work for me. The Python interpreter reports an error on line 27: knn.train(samples, responses). error (-209) Response array must contain as many elements as the total number of samples in function cvPreprocessOrderedResponses. Do you have any idea how to fix it? Thanks! - Albert Vonpupp
Can you give it a try now? I improved the code so that it can be used with more types of feature extractors. - pevogam
It doesn't give any errors, but I don't see any output either. Sorry, I'm completely new to opencv... could you explain what the expected output is (a file, the console, a window)? Thanks a lot! - Albert Vonpupp
Hi, no problem. You can use this code as a replacement for the original code shown by Abid Rahman K. I have now added the final 4 lines from there, so just copy the whole snippet again. - pevogam
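
For readers on newer OpenCV releases: SURF has since moved to the contrib package, so none of the cv2.SURF calls above exist there. Assuming opencv-contrib-python (the xfeatures2d module) is installed, the rough equivalent of the detection step is:

import cv2

gray = cv2.cvtColor(cv2.imread('haystack.jpg'), cv2.COLOR_BGR2GRAY)

# SURF now lives in the contrib module and uses detectAndCompute
surf = cv2.xfeatures2d.SURF_create(hessianThreshold = 400)
kp, descriptors = surf.detectAndCompute(gray, None)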
