Find the coordinates of a point in a rotated image [OpenCV / Python], returning [x, y] coordinates.

I want to rotate an image by several angles in sequence. I do this using cv2.getRotationMatrix2D and cv2.warpAffine. For a pair of pixel coordinates [x, y], where x = cols and y = rows (in this case), I want to find their new coordinates in the rotated image.
I used the following slightly modified code from http://www.pyimagesearch.com/2017/01/02/rotate-images-correctly-with-opencv-and-python/ together with the explanation of the affine transformation at http://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/warp_affine/warp_affine.html to try to map the points in the rotated image.
The problem is that my mapping (or my rotation) is wrong, because the transformed coordinates I compute are wrong. (As a simple check, I tried computing the corner coordinates by hand.) Code:
import cv2
import numpy as np

def rotate_bound(image, angle):
    # grab the dimensions of the image and then determine the center
    (h, w) = image.shape[:2]
    (cX, cY) = ((w-1) // 2.0, (h-1) // 2.0)

    # grab the rotation matrix (applying the negative of the
    # angle to rotate clockwise), then grab the sine and cosine
    # (i.e., the rotation components of the matrix)
    M = cv2.getRotationMatrix2D((cX, cY), -angle, 1.0)
    cos = np.abs(M[0, 0])
    sin = np.abs(M[0, 1])

    # compute the new bounding dimensions of the image
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))
    print(nW, nH)

    # adjust the rotation matrix to take into account translation
    M[0, 2] += ((nW-1) / 2.0) - cX
    M[1, 2] += ((nH-1) / 2.0) - cY

    # perform the actual rotation and return the image
    return M, cv2.warpAffine(image, M, (nW, nH))

# function that calculates the updated locations of the coordinates
# after rotation
def rotated_coord(points, M):
    points = np.array(points)
    ones = np.ones(shape=(len(points),1))
    points_ones = np.concatenate((points,ones), axis=1)
    transformed_pts = M.dot(points_ones.T).T
    return transformed_pts
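The homogeneous-coordinate trick used by `rotated_coord` can be sketched in isolation; here is a minimal example with a hypothetical translation-only 2×3 affine matrix (plain NumPy, no OpenCV needed):

```python
import numpy as np

# a 2x3 affine matrix that just translates points by (+5, +3)
M = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 3.0]])

points = np.array([[2, 2], [10, 0]])
# append a column of ones so the 2x3 matrix can act on [x, y, 1]
ones = np.ones((len(points), 1))
points_ones = np.concatenate((points, ones), axis=1)
transformed = M.dot(points_ones.T).T
print(transformed)  # [[ 7.  5.] [15.  3.]]
```

The column of ones turns each [x, y] into [x, y, 1], so the translation in the matrix's third column is applied by the same dot product as the rotation part.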

# read the image & call the function
img = cv2.imread("Lenna.png")
points = np.array([[511, 511]])
# rotate by 90 degrees, for example
M, rotated = rotate_bound(img, 90)
# find out the new locations
transformed_pts = rotated_coord(points, M)

For the coordinates [511, 511], for example, I get [-0.5, 511.5] ([col, row]) when I expect [0, 511].
If I instead use w // 2, a black border is added around the image and my updated rotated coordinates are off again.
Question: how can I find the correct location of a pair of pixel coordinates in the rotated image using Python?
2 Answers


For this kind of image rotation, where the image size changes after rotation and the reference point changes as well, the transformation matrix has to be modified.

The new width and height can be computed with the following relations (the absolute values make them hold for any angle):

new.width = h*|sin(θ)| + w*|cos(θ)|

new.height = h*|cos(θ)| + w*|sin(θ)|

Since the image size changes (as you may notice from the black border), the coordinates of the rotation point (the centre of the image) change too. This must then be taken into account in the transformation matrix.
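As a quick check of the relations above, here is a minimal sketch in plain NumPy, assuming a 512×512 image rotated by 45°:

```python
import numpy as np

h, w = 512, 512
theta = np.deg2rad(45)

# absolute values, so the formulas also hold beyond the first quadrant
new_w = int(h * abs(np.sin(theta)) + w * abs(np.cos(theta)))
new_h = int(h * abs(np.cos(theta)) + w * abs(np.sin(theta)))
print(new_w, new_h)  # 724 724, i.e. 512 * sqrt(2) truncated to int
```

At 45° the bounding box grows by a factor of √2 in both directions, which is why the rotated image needs a larger canvas.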

I explain an example on my blog: image rotation bounding box opencv.

def rotate_box(bb, cx, cy, h, w):
    # theta is the rotation angle, assumed defined in the enclosing scope
    # opencv calculates the standard transformation matrix
    M = cv2.getRotationMatrix2D((cx, cy), theta, 1.0)
    # grab the rotation components of the matrix
    cos = np.abs(M[0, 0])
    sin = np.abs(M[0, 1])
    # compute the new bounding dimensions of the image
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))
    # adjust the rotation matrix to take into account translation
    M[0, 2] += (nW / 2) - cx
    M[1, 2] += (nH / 2) - cy

    new_bb = list(bb)
    for i, coord in enumerate(bb):
        # prepare the vector to be transformed
        v = [coord[0], coord[1], 1]
        # apply the adjusted matrix to the point
        calculated = np.dot(M, v)
        new_bb[i] = (calculated[0], calculated[1])
    return new_bb


## Calculate the new bounding box coordinates
new_bb = {}
for i in bb1:
    new_bb[i] = rotate_box(bb1[i], cx, cy, heigth, width)
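A self-contained sketch of the same idea in plain NumPy (no OpenCV required: the 2×3 matrix that `cv2.getRotationMatrix2D` would return is built by hand, and the corner list here is a hypothetical stand-in for one entry of `bb1`, using the question's 512×512 image rotated clockwise by 90°):

```python
import numpy as np

def rotate_points(points, cx, cy, h, w, angle_deg):
    # same 2x3 matrix cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    # would return (positive angle = counter-clockwise)
    a = np.deg2rad(angle_deg)
    cos, sin = np.cos(a), np.sin(a)
    M = np.array([[ cos, sin, (1 - cos) * cx - sin * cy],
                  [-sin, cos, sin * cx + (1 - cos) * cy]])
    # new bounding dimensions, then shift the centre accordingly
    nW = int(h * abs(sin) + w * abs(cos))
    nH = int(h * abs(cos) + w * abs(sin))
    M[0, 2] += (nW - 1) / 2.0 - cx
    M[1, 2] += (nH - 1) / 2.0 - cy
    # apply M to each point in homogeneous coordinates [x, y, 1]
    return [tuple(M @ np.array([x, y, 1.0])) for (x, y) in points]

# corners of a 512x512 image; negative angle = clockwise rotation
corners = [(0, 0), (511, 0), (511, 511), (0, 511)]
new = rotate_points(corners, (512 - 1) / 2.0, (512 - 1) / 2.0, 512, 512, -90)
print([(round(x), round(y)) for x, y in new])
# [(511, 0), (511, 511), (0, 511), (0, 0)]
```

Note that with the (w-1)/2 centre and the matching (nW-1)/2 adjustment, the corner (511, 511) lands exactly on (0, 511), as the question expects.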

Please help me understand what bb1 is here (the `for i in bb1:` part)? I only know cx, cy, height, width and the angle. - Ankit Kamboj


In case anyone needs C++ code similar to @cristianpb's Python code above, here it is:

// pass the original angle in degrees, i.e. don't convert it to radians
cv::Point2f rotatePointUsingTransformationMat(const cv::Point2f& inPoint, const cv::Point2f& center, const double& rotAngle)
{
    cv::Mat rot = cv::getRotationMatrix2D(center, rotAngle, 1.0);
    // absolute values, as in the Python version, so the new size is positive
    double cos = std::abs(rot.at<double>(0, 0));
    double sin = std::abs(rot.at<double>(0, 1));
    int newWidth  = int(((center.y * 2) * sin) + ((center.x * 2) * cos));
    int newHeight = int(((center.y * 2) * cos) + ((center.x * 2) * sin));

    rot.at<double>(0, 2) += newWidth / 2.0 - center.x;
    rot.at<double>(1, 2) += newHeight / 2.0 - center.y;

    // multiply the 2x3 matrix with the homogeneous vector [x, y, 1];
    // keep the arithmetic in double to avoid integer truncation
    double v[3] = {inPoint.x, inPoint.y, 1.0};
    double out[2] = {0.0, 0.0};

    for (int i = 0; i < rot.rows; i++)
    {
        double sum = 0.0;
        for (int k = 0; k < 3; k++)
        {
            sum += rot.at<double>(i, k) * v[k];
        }
        out[i] = sum;
    }
    return cv::Point2f(out[0], out[1]);
}
