iOS: Capture an image from the front-facing camera

15

I am making an application where I would like to capture an image from the front-facing camera, without presenting any kind of capture screen. I want to take the picture entirely in code, without any user interaction. How can I do this with the front-facing camera?


2
Do you mean capturing the image covertly, without the user noticing? - rid
2
Yes, I know it sounds bad, but it's entirely harmless. The app gets users to make silly faces, and I want to capture those so they can see how ridiculous they looked. - mtmurdock
1
Your implementation of such a feature may be harmless, but I can think of plenty of other cases where it would be the opposite (which is perhaps why it may not be possible). - brettkelly
5 Answers

41

How to capture an image with the front-facing camera using AVFoundation:

Development notes:

ViewController.h

// Frameworks
#import <CoreVideo/CoreVideo.h>
#import <CoreMedia/CoreMedia.h>
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

@interface CameraViewController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate>

// Camera
@property (weak, nonatomic) IBOutlet UIImageView* cameraImageView;
@property (strong, nonatomic) AVCaptureDevice* device;
@property (strong, nonatomic) AVCaptureSession* captureSession;
@property (strong, nonatomic) AVCaptureVideoPreviewLayer* previewLayer;
@property (strong, nonatomic) UIImage* cameraImage;

@end

ViewController.m

#import "CameraViewController.h"

@implementation CameraViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    [self setupCamera];
    [self setupTimer];
}

- (void)setupCamera
{    
    NSArray* devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for(AVCaptureDevice *device in devices)
    {
        if([device position] == AVCaptureDevicePositionFront)
            self.device = device;
    }

    AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:self.device error:nil];
    AVCaptureVideoDataOutput* output = [[AVCaptureVideoDataOutput alloc] init];
    output.alwaysDiscardsLateVideoFrames = YES;

    dispatch_queue_t queue;
    queue = dispatch_queue_create("cameraQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];

    NSString* key = (NSString *) kCVPixelBufferPixelFormatTypeKey;
    NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [output setVideoSettings:videoSettings];

    self.captureSession = [[AVCaptureSession alloc] init];
    [self.captureSession addInput:input];
    [self.captureSession addOutput:output];
    [self.captureSession setSessionPreset:AVCaptureSessionPresetPhoto];

    self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

    // CHECK FOR YOUR APP
    self.previewLayer.frame = CGRectMake(0, 0, self.view.frame.size.height, self.view.frame.size.width);
    self.previewLayer.orientation = AVCaptureVideoOrientationLandscapeRight;
    // CHECK FOR YOUR APP

    [self.view.layer insertSublayer:self.previewLayer atIndex:0];   // Comment-out to hide preview layer

    [self.captureSession startRunning];
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer,0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    self.cameraImage = [UIImage imageWithCGImage:newImage scale:1.0f orientation:UIImageOrientationDownMirrored];

    CGImageRelease(newImage);

    CVPixelBufferUnlockBaseAddress(imageBuffer,0);
}

- (void)setupTimer
{
    NSTimer* cameraTimer = [NSTimer scheduledTimerWithTimeInterval:2.0f target:self selector:@selector(snapshot) userInfo:nil repeats:YES];
}

- (void)snapshot
{
    NSLog(@"SNAPSHOT");
    self.cameraImageView.image = self.cameraImage;  // Comment-out to hide snapshot
}

@end

Hook this up to a UIViewController with a UIImageView for the snapshot, and it will work! Snapshots are taken programmatically at 2.0-second intervals without any user input. Comment out the marked lines to remove the preview layer and the snapshot feedback.
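
One caveat this answer predates: on iOS 10 and later the app will be terminated at capture time unless Info.plist contains an NSCameraUsageDescription string, and the session delivers no frames until the user grants camera access. A minimal sketch of the permission request (Swift shown for brevity; requestAccessForMediaType:completionHandler: is the Objective-C equivalent):

// Ask for permission before setting up the camera; on iOS 10+ the app
// must also declare NSCameraUsageDescription in Info.plist, or it will
// be killed the first time the camera is used.
AVCaptureDevice.requestAccess(for: .video) { granted in
    guard granted else { return }
    DispatchQueue.main.async {
        // safe to configure and start the capture session here
    }
}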

Let me know if you have any other questions or comments!


1
Very nice! I'd suggest accepting this answer instead of mine (assuming it works). - Tim
1
I'm not sure; this is the first time I've considered an app like this. I suppose you'd need to get into the details and make sure users/Apple know it won't be used for anything malicious (as mentioned in the other posts). Your app sounds fun and harmless, so it's probably fine! - Ricardo RendonCepeda
1
Ricardo, many thanks for an outstanding answer. The "no preview" aspect aside, this is a great example of CoreMedia, AVFoundation, and so on. Thanks again! - Fattie
Does anyone know how to enable sound in this code? I dare say Apple would reject it in review.. - filou
kCGBitmapByteOrder32Little and kCGImageAlphaPremultipliedFirst no longer seem to be available. Any new values? - Clay Ellis

5

The code above, converted to Swift 4:

import UIKit
import AVFoundation

class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    @IBOutlet weak var cameraImageView: UIImageView!

    var device: AVCaptureDevice?
    var captureSession: AVCaptureSession?
    var previewLayer: AVCaptureVideoPreviewLayer?
    var cameraImage: UIImage?

    override func viewDidLoad() {
        super.viewDidLoad()

        setupCamera()
        setupTimer()
    }

    func setupCamera() {
        let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                                mediaType: AVMediaType.video,
                                                                position: .front)
        device = discoverySession.devices[0]

        let input: AVCaptureDeviceInput
        do {
            input = try AVCaptureDeviceInput(device: device!)
        } catch {
            return
        }

        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true

        let queue = DispatchQueue(label: "cameraQueue")
        output.setSampleBufferDelegate(self, queue: queue)
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

        captureSession = AVCaptureSession()
        captureSession?.addInput(input)
        captureSession?.addOutput(output)
        captureSession?.sessionPreset = AVCaptureSession.Preset.photo

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
        previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
        previewLayer?.frame = CGRect(x: 0.0, y: 0.0, width: view.frame.width, height: view.frame.height)

        view.layer.insertSublayer(previewLayer!, at: 0)

        captureSession?.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let baseAddress = UnsafeMutableRawPointer(CVPixelBufferGetBaseAddress(imageBuffer!))
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)
        let width = CVPixelBufferGetWidth(imageBuffer!)
        let height = CVPixelBufferGetHeight(imageBuffer!)

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let newContext = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)

        let newImage = newContext!.makeImage()
        cameraImage = UIImage(cgImage: newImage!)

        CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    }

    func setupTimer() {
        _ = Timer.scheduledTimer(timeInterval: 2.0, target: self, selector: #selector(snapshot), userInfo: nil, repeats: true)
    }

    @objc func snapshot() {
        print("SNAPSHOT")
        cameraImageView.image = cameraImage
    }
}

3

If you want to capture from the camera without displaying the video stream/image, you probably want AVFoundation. Unlike UIImagePickerController, it requires writing some extra code. Check out Apple's AVCam sample project to get started.
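
For a sense of what that looks like, here is a rough sketch of a preview-less capture using the AVCapturePhotoOutput API that later replaced AVCaptureStillImageOutput (iOS 10+, delegate variant shown is iOS 11+); the SilentCapture class and its method names are illustrative, not from the AVCam sample:

import AVFoundation
import UIKit

// Sketch: grab one still from the front camera; no preview layer is
// ever attached to the session.
class SilentCapture: NSObject, AVCapturePhotoCaptureDelegate {
    let session = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()
    var lastImage: UIImage?

    func start() throws {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video,
                                                   position: .front) else { return }
        session.sessionPreset = .photo
        session.addInput(try AVCaptureDeviceInput(device: camera))
        session.addOutput(photoOutput)
        session.startRunning()
    }

    func snap() {
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }

    // AVCapturePhotoCaptureDelegate callback (iOS 11+)
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard let data = photo.fileDataRepresentation() else { return }
        lastImage = UIImage(data: data)
    }
}

Usage would be along the lines of let capture = SilentCapture(); try? capture.start(); capture.snap() — the photo arrives asynchronously in the delegate callback.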


3

I converted the code above from Objective-C to Swift 3, in case anyone is still looking for a solution in 2017.

import UIKit
import AVFoundation

class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    @IBOutlet weak var cameraImageView: UIImageView!

    var device: AVCaptureDevice?
    var captureSession: AVCaptureSession?
    var previewLayer: AVCaptureVideoPreviewLayer?
    var cameraImage: UIImage?

    override func viewDidLoad() {
        super.viewDidLoad()

        setupCamera()
        setupTimer()
    }

    func setupCamera() {
        let discoverySession = AVCaptureDeviceDiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                               mediaType: AVMediaTypeVideo,
                                                               position: .front)
        device = discoverySession?.devices[0]

        let input: AVCaptureDeviceInput
        do {
            input = try AVCaptureDeviceInput(device: device)
        } catch {
            return
        }

        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true

        let queue = DispatchQueue(label: "cameraQueue")
        output.setSampleBufferDelegate(self, queue: queue)
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable: kCVPixelFormatType_32BGRA]

        captureSession = AVCaptureSession()
        captureSession?.addInput(input)
        captureSession?.addOutput(output)
        captureSession?.sessionPreset = AVCaptureSessionPresetPhoto

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
        previewLayer?.frame = CGRect(x: 0.0, y: 0.0, width: view.frame.width, height: view.frame.height)

        view.layer.insertSublayer(previewLayer!, at: 0)

        captureSession?.startRunning()
    }

    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let baseAddress = UnsafeMutableRawPointer(CVPixelBufferGetBaseAddress(imageBuffer!))
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)
        let width = CVPixelBufferGetWidth(imageBuffer!)
        let height = CVPixelBufferGetHeight(imageBuffer!)

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let newContext = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)

        let newImage = newContext!.makeImage()
        cameraImage = UIImage(cgImage: newImage!)

        CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    }

    func setupTimer() {
        _ = Timer.scheduledTimer(timeInterval: 2.0, target: self, selector: #selector(snapshot), userInfo: nil, repeats: true)
    }

    func snapshot() {
        print("SNAPSHOT")
        cameraImageView.image = cameraImage
    }
}

Also, I found a simpler way to get an image from the CMSampleBuffer:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let myPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let myCIimage = CIImage(cvPixelBuffer: myPixelBuffer!)
    let videoImage = UIImage(ciImage: myCIimage)
    cameraImage = videoImage
}
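
One caveat with this shortcut: a UIImage built directly from a CIImage is not backed by a CGImage, so APIs that expect a bitmap-backed image (e.g. reading image.cgImage) will return nil. If that matters, render it through a CIContext first; a small sketch using the variables from the snippet above:

// Render the CIImage into a real bitmap so the resulting UIImage is
// CGImage-backed; creating the CIContext once and reusing it is much
// cheaper than making a new one per frame.
let context = CIContext()
if let cgImage = context.createCGImage(myCIimage, from: myCIimage.extent) {
    cameraImage = UIImage(cgImage: cgImage)
}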

No problem, glad it was useful, though I'm not sure it still runs without warnings in Swift 4. - Mihai Erős
Not just warnings; a few things need to change, but the fix-its handle most of it. - Dan Rosenstark

0
The documentation for the UIImagePickerController class lists a method called takePicture. It says:
"Use this method in conjunction with a custom overlay view to initiate the programmatic capture of a still image. This supports taking more than one picture without leaving the interface, but requires that you hide the default image picker controls."
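
A minimal sketch of that approach, assuming it runs inside a view controller that adopts UIImagePickerControllerDelegate and UINavigationControllerDelegate:

// Present a front-camera picker with the default controls hidden,
// then trigger the shutter in code.
let picker = UIImagePickerController()
picker.sourceType = .camera
picker.cameraDevice = .front
picker.showsCameraControls = false   // hide the default shutter UI
picker.delegate = self

present(picker, animated: false) {
    // Programmatic capture; the result arrives in
    // imagePickerController(_:didFinishPickingMediaWithInfo:).
    picker.takePicture()
}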
