I'm trying to write a routine that takes a UIImage and returns a new UIImage containing just the face. This seems like it should be very straightforward, but my brain is having trouble getting around the differences between the CoreImage and UIImage coordinate spaces.
Here are the basics:
- (UIImage *)imageFromImage:(UIImage *)image inRect:(CGRect)rect {
    // Crop the backing CGImage; the rect is interpreted in the CGImage's
    // pixel space, with the origin at the top-left.
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    return newImage;
}
- (UIImage *)getFaceImage:(UIImage *)picture {
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                  forKey:CIDetectorAccuracy]];
    CIImage *ciImage = [CIImage imageWithCGImage:[picture CGImage]];
    NSArray *features = [detector featuresInImage:ciImage];

    // For simplicity, I'm grabbing the first one in this code sample,
    // and we can all pretend that the photo has one face for sure. :-)
    CIFaceFeature *faceFeature = [features objectAtIndex:0];

    return [self imageFromImage:picture inRect:faceFeature.bounds];
}
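For context, this is roughly how I call it (a trivial usage sketch; photo is just a stand-in for whatever UIImage I'm testing with):

UIImage *faceImage = [self getFaceImage:photo];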
The image that comes back is from the flipped image. I've tried adjusting faceFeature.bounds with something like this:

CGAffineTransform t = CGAffineTransformMakeScale(1.0f, -1.0f);
CGRect newRect = CGRectApplyAffineTransform(faceFeature.bounds, t);
... but that gives me results outside of the image.
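My guess is that the scale alone reflects the rect to negative Y, so it would also need a translation by the image height. A minimal sketch of that idea, assuming ciImage (the CIImage the detector ran on, whose extent is in the same pixel space as faceFeature.bounds) is still in scope:

CGFloat imageHeight = ciImage.extent.size.height;
CGAffineTransform t = CGAffineTransformMakeScale(1.0f, -1.0f);
// The translation is applied before the scale, so the net effect on a
// point is y' = imageHeight - y.
t = CGAffineTransformTranslate(t, 0.0f, -imageHeight);
CGRect newRect = CGRectApplyAffineTransform(faceFeature.bounds, t);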
I'm sure there's something simple to fix this, but short of calculating the bottom-up offset myself and creating a new rect with that as the Y origin (sketched below), is there a "proper" way to do this?
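For reference, here's that manual calculation written out (a minimal sketch; it assumes the detector ran on a CIImage built from picture's CGImage, so faceFeature.bounds is in that CGImage's pixel space):

CGRect bounds = faceFeature.bounds;
// CoreImage measures Y from the bottom-left corner; CGImage cropping
// measures it from the top-left, so flip the Y origin.
CGFloat imageHeight = CGImageGetHeight([picture CGImage]);
CGRect flippedRect = CGRectMake(bounds.origin.x,
                                imageHeight - bounds.origin.y - bounds.size.height,
                                bounds.size.width,
                                bounds.size.height);
UIImage *faceImage = [self imageFromImage:picture inRect:flippedRect];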
Thanks!