I'm using OpenCV 2.2 to detect faces on the iPhone. I'm using iOS 4's AVCaptureSession to get the camera stream, as shown in the code below.
My problem is that the video frames arrive as CVBufferRef objects (pointing to a CVImageBuffer), oriented as landscape, 480 px wide by 300 px high. That's fine if you're holding the phone sideways, but when the phone is upright I want to rotate these frames 90 degrees clockwise so that OpenCV can find the faces correctly.
I could convert the CVBufferRef to a CGImage, then to a UIImage, and rotate it the way this person does: Rotate CGImage taken from video frame
But that burns a lot of CPU. I'm looking for a faster way to rotate the incoming images, ideally using the GPU to do this processing, if possible.
Any ideas?
Ian
Code sample:
-(void) startCameraCapture {
    // Start up the face detector
    faceDetector = [[FaceDetector alloc] initWithCascade:@"haarcascade_frontalface_alt2" withFileExtension:@"xml"];

    // Create the AVCaptureSession
    session = [[AVCaptureSession alloc] init];

    // Create a preview layer to show the output from the camera
    AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    previewLayer.frame = previewView.frame;
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [previewView.layer addSublayer:previewLayer];

    // Get the default camera device
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create an AVCaptureInput with the camera device
    NSError *error = nil;
    AVCaptureInput *cameraInput = [[AVCaptureDeviceInput alloc] initWithDevice:camera error:&error];
    if (cameraInput == nil) {
        NSLog(@"Failed to create camera input: %@", error);
    }

    // Set up the output
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    videoOutput.alwaysDiscardsLateVideoFrames = YES;

    // Create a queue, separate from the main queue, to run the capture on
    dispatch_queue_t captureQueue = dispatch_queue_create("captureQueue", NULL);

    // Set up our delegate
    [videoOutput setSampleBufferDelegate:self queue:captureQueue];

    // Release our reference to the queue: setSampleBufferDelegate:queue:
    // retains it, so the output keeps it alive as long as it needs it
    dispatch_release(captureQueue);

    // Configure the pixel format
    videoOutput.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA],
                                 (id)kCVPixelBufferPixelFormatTypeKey,
                                 nil];

    // ...and the size of the frames we want
    // (try AVCaptureSessionPresetLow if this is too slow)
    [session setSessionPreset:AVCaptureSessionPresetMedium];

    // If you wish to cap the frame rate to a known value, such as 10 fps,
    // set minFrameDuration
    videoOutput.minFrameDuration = CMTimeMake(1, 10);

    // Add the input and output
    [session addInput:cameraInput];
    [session addOutput:videoOutput];

    // Start the session
    [session startRunning];
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Only run if we're not already processing an image
    if (!faceDetector.imageNeedsProcessing) {
        // Get the CVImageBuffer from the sample buffer
        CVImageBufferRef cvImage = CMSampleBufferGetImageBuffer(sampleBuffer);
        // Send it to the FaceDetector for later processing
        [faceDetector setImageFromCVPixelBufferRef:cvImage];
        // Trigger the image processing on the main thread
        [self performSelectorOnMainThread:@selector(processImage) withObject:nil waitUntilDone:NO];
    }
}
The vImageRotate90_ARGB8888 function prototype has changed; my call now looks like vImageRotate90_ARGB8888(&inbuff, &outbuff, rotationConstant, bgColor, 0); (where uint8_t bgColor[4] = {0, 0, 0, 0};). You also need to create a CVPixelBufferRef manually in order to pass the resulting image data to an AVAssetWriterInputPixelBufferAdaptor. Just don't forget to provide a CVPixelBufferReleaseBytesCallback to free the data buffer you allocated. - Mr. T
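For anyone piecing this together, here is a rough sketch of the vImage call Mr. T describes (Accelerate framework, iOS 5+). The function name and the vImage_Buffer field order (data, height, width, rowBytes) are per the Accelerate headers; the parameter names and the separately allocated output buffer are assumptions for illustration:

```c
#include <Accelerate/Accelerate.h>

/* Sketch only: rotate a width x height BGRA frame 90 degrees clockwise.
 * base points at the locked pixel data of the source CVPixelBuffer;
 * outData must be a separately allocated buffer of height*width*4 bytes. */
vImage_Error rotateFrame90CW(void *base, size_t width, size_t height,
                             size_t bytesPerRow, void *outData) {
    /* vImage_Buffer fields: data, height, width, rowBytes */
    vImage_Buffer inbuff  = { base,    height, width,  bytesPerRow };
    vImage_Buffer outbuff = { outData, width,  height, height * 4 };
    uint8_t bgColor[4] = {0, 0, 0, 0};
    uint8_t rotationConstant = 3;  /* 3 = 90 degrees clockwise */
    return vImageRotate90_ARGB8888(&inbuff, &outbuff,
                                   rotationConstant, bgColor, 0);
}
```

After this, outData is what you would wrap in a manually created CVPixelBufferRef (with a CVPixelBufferReleaseBytesCallback that frees it) before handing it to the AVAssetWriterInputPixelBufferAdaptor.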