iOS face detector orientation and CIImage orientation setting


**Edit:** I found this code helpful for handling front-camera images: http://blog.logichigh.com/2008/06/05/uiimage-fix/

Hopefully someone who has run into a similar problem can help me; I haven't found a solution so far. (This may look long, but most of it is just helper code.)

I am using the iOS face detector on images taken with the camera (front and back) as well as images picked from the library. I use UIImagePickerController both for taking pictures with the camera and for picking from the library, not AVFoundation as in the SquareCam demo.

I was getting very confusing detection coordinates (when I got any at all), so I wrote a short debug method to get the face bounds, plus a utility that draws squares over them, and I wanted to check which orientation the detector actually works with:

#define RECTBOX(R)   [NSValue valueWithCGRect:R]
- (NSArray *)detectFaces:(UIImage *)inputimage
{
    _detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy]];
    NSNumber *orientation = [NSNumber numberWithInt:[inputimage imageOrientation]]; // I also saw code where they add +1 to the orientation
    NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];

    CIImage *ciimage = [CIImage imageWithCGImage:inputimage.CGImage options:imageOptions];

    NSMutableArray *returnArray = [NSMutableArray array];

    // try like this first
    //    NSArray *features = [self.detector featuresInImage:ciimage options:imageOptions];
    // if that doesn't work, go on to this (trying all orientations)
    NSArray *features;

    int exif;
    // iOS face detector: trying all of the orientations
    for (exif = 1; exif <= 8; exif++)
    {
        NSNumber *orientation = [NSNumber numberWithInt:exif];
        NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];

        NSTimeInterval start = [NSDate timeIntervalSinceReferenceDate];
        features = [self.detector featuresInImage:ciimage options:imageOptions];
        // log the runtime before the early break, otherwise the successful pass is never timed
        NSTimeInterval duration = [NSDate timeIntervalSinceReferenceDate] - start;
        NSLog(@"faceDetection: face detection total runtime is %f s", duration);

        if (features.count > 0)
        {
            NSString *str = [NSString stringWithFormat:@"found faces using exif %d", exif];
            [faceDetection log:str];
            break;
        }
    }
    if (features.count > 0)
    {
        [faceDetection log:@"-I- Found faces with ios face detector"];
        for (CIFaceFeature *feature in features)
        {
            CGRect rect = feature.bounds;
            // flip from Core Image's bottom-left origin to UIKit's top-left origin
            CGRect r = CGRectMake(rect.origin.x, inputimage.size.height - rect.origin.y - rect.size.height, rect.size.width, rect.size.height);
            [returnArray addObject:RECTBOX(r)];
        }
        return returnArray;
    } else {
        // no faces from the iOS face detector; try the OpenCV detector
    }
    return returnArray;
}

![1][1]

After trying many different pictures, I noticed that the face detector's orientation is not consistent with the camera image's properties. I took a lot of photos with the front camera where the UIImage's orientation was 3 (queried via imageOrientation), but the face detector found no faces with that setting. When running through all eight EXIF possibilities, the detector would finally pick up the faces at a completely different orientation.

[1]: http://i.stack.imgur.com/D7bkZ.jpg

How can I solve this? Is there a mistake in my code?

Another problem I have (closely tied to the face detector): when the detector does pick up faces, but at the "wrong" orientation (happens mostly with the front camera), the UIImage I initially used displays correctly in a UIImageView, but when I draw the square overlay (I use OpenCV in my app, so I decided to convert the UIImage to a cv::Mat and draw the overlay with OpenCV) the whole image is rotated by 90 degrees (only the cv::Mat image, not the UIImage I displayed initially).

The only reasoning I can come up with is that the face detector is messing with some buffer (context?) that the UIImage-to-cv::Mat conversion is also using. How can I separate these buffers?

The code for converting a UIImage to cv::Mat (from the "famous" UIImage category):

-(cv::Mat)CVMat
{

    CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to backing data
                                                    cols, // Width of bitmap
                                                    rows, // Height of bitmap
                                                    8, // Bits per component
                                                    cvMat.step[0], // Bytes per row
                                                    colorSpace, // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

- (id)initWithCVMat:(const cv::Mat&)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;

    if (cvMat.elemSize() == 1)
    {
        colorSpace = CGColorSpaceCreateDeviceGray();
    }
    else
    {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                     // Width
                                            cvMat.rows,                                     // Height
                                            8,                                              // Bits per component
                                            8 * cvMat.elemSize(),                           // Bits per pixel
                                            cvMat.step[0],                                  // Bytes per row
                                            colorSpace,                                     // Colorspace
                                            kCGImageAlphaNone | kCGBitmapByteOrderDefault,  // Bitmap info flags
                                            provider,                                       // CGDataProviderRef
                                            NULL,                                           // Decode
                                            false,                                          // Should interpolate
                                            kCGRenderingIntentDefault);                     // Intent   

    self = [self initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return self;
}

- (cv::Mat)CVRgbMat
{
    cv::Mat tmpimage = self.CVMat;
    cv::Mat image;
    cvtColor(tmpimage, image, cv::COLOR_BGRA2BGR);
    return image;
}

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingImage:(UIImage *)img editingInfo:(NSDictionary *)editInfo {
    self.prevImage = img;
    NSArray *arr = [[faceDetection sharedFaceDetector] detectFaces:img];
    for (id r in arr)
    {
        CGRect rect = RECTUNBOX(r);
        // draw onto the accumulated image, otherwise only the last square survives
        img = [utils drawSquareOnImage:img square:rect];
    }
    self.previewView.image = img;
    [self.imgPicker dismissModalViewControllerAnimated:YES];
}

Use this code before running the face detector, and you won't run into the orientation problem. - Avba
2 Answers

I don't think it's a good idea to rotate the whole image's pixels and then match against CIFaceFeature; you can imagine that redrawing at the rotated orientation is quite expensive. I ran into the same problem and solved it by mapping the coordinate system of CIFaceFeature onto UIImageOrientation. I extended the CIFaceFeature class with some conversion methods to get correct point locations and bounds with respect to the UIImage and its UIImageView (or the CALayer of a UIView). The full implementation is posted here: https://gist.github.com/laoyang/5747004. You can use it directly.

Here is the most basic conversion of a point from a CIFaceFeature; the returned CGPoint is converted based on the image's orientation:

- (CGPoint) pointForImage:(UIImage*) image fromPoint:(CGPoint) originalPoint {

    CGFloat imageWidth = image.size.width;
    CGFloat imageHeight = image.size.height;

    CGPoint convertedPoint;

    switch (image.imageOrientation) {
        case UIImageOrientationUp:
            convertedPoint.x = originalPoint.x;
            convertedPoint.y = imageHeight - originalPoint.y;
            break;
        case UIImageOrientationDown:
            convertedPoint.x = imageWidth - originalPoint.x;
            convertedPoint.y = originalPoint.y;
            break;
        case UIImageOrientationLeft:
            convertedPoint.x = imageWidth - originalPoint.y;
            convertedPoint.y = imageHeight - originalPoint.x;
            break;
        case UIImageOrientationRight:
            convertedPoint.x = originalPoint.y;
            convertedPoint.y = originalPoint.x;
            break;
        case UIImageOrientationUpMirrored:
            convertedPoint.x = imageWidth - originalPoint.x;
            convertedPoint.y = imageHeight - originalPoint.y;
            break;
        case UIImageOrientationDownMirrored:
            convertedPoint.x = originalPoint.x;
            convertedPoint.y = originalPoint.y;
            break;
        case UIImageOrientationLeftMirrored:
            convertedPoint.x = imageWidth - originalPoint.y;
            convertedPoint.y = originalPoint.x;
            break;
        case UIImageOrientationRightMirrored:
            convertedPoint.x = originalPoint.y;
            convertedPoint.y = imageHeight - originalPoint.x;
            break;
        default:
            break;
    }
    return convertedPoint;
}

Building on the conversion above, here are the category methods:

// Get converted features with respect to the imageOrientation property
- (CGPoint) leftEyePositionForImage:(UIImage *)image;
- (CGPoint) rightEyePositionForImage:(UIImage *)image;
- (CGPoint) mouthPositionForImage:(UIImage *)image;
- (CGRect) boundsForImage:(UIImage *)image;

// Get normalized features (0-1) with respect to the imageOrientation property
- (CGPoint) normalizedLeftEyePositionForImage:(UIImage *)image;
- (CGPoint) normalizedRightEyePositionForImage:(UIImage *)image;
- (CGPoint) normalizedMouthPositionForImage:(UIImage *)image;
- (CGRect) normalizedBoundsForImage:(UIImage *)image;

// Get feature location inside of a given UIView size with respect to the imageOrientation property
- (CGPoint) leftEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGPoint) rightEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGPoint) mouthPositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGRect) boundsForImage:(UIImage *)image inView:(CGSize)viewSize;

One more thing to note: you need to specify the correct EXIF orientation, derived from the UIImage's orientation, when extracting the face features. It's quite confusing... here is what I did:

int exifOrientation;
switch (self.image.imageOrientation) {
    case UIImageOrientationUp:
        exifOrientation = 1;
        break;
    case UIImageOrientationDown:
        exifOrientation = 3;
        break;
    case UIImageOrientationLeft:
        exifOrientation = 8;
        break;
    case UIImageOrientationRight:
        exifOrientation = 6;
        break;
    case UIImageOrientationUpMirrored:
        exifOrientation = 2;
        break;
    case UIImageOrientationDownMirrored:
        exifOrientation = 4;
        break;
    case UIImageOrientationLeftMirrored:
        exifOrientation = 5;
        break;
    case UIImageOrientationRightMirrored:
        exifOrientation = 7;
        break;
    default:
        break;
}

NSDictionary *detectorOptions = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh };
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions];

NSArray *features = [faceDetector featuresInImage:[CIImage imageWithCGImage:self.image.CGImage]
                                          options:@{CIDetectorImageOrientation:[NSNumber numberWithInt:exifOrientation]}];




(Content originally from Stack Overflow.)