How to add a camera preview view to three custom UIViews in iOS Swift

I need to build an app with video-processing functionality.
My requirement is to create three views that all include the camera preview layer: the first view should show the original captured video, the second a flipped version of it, and the third a color-inverted version.
I have started developing against this requirement. First, I created the three views and the properties required for camera capture.
    @IBOutlet weak var captureView: UIView!
    @IBOutlet weak var flipView: UIView!
    @IBOutlet weak var InvertView: UIView!
    
    // Camera capture: required properties
    var videoDataOutput: AVCaptureVideoDataOutput!
    var videoDataOutputQueue: DispatchQueue!
    var previewLayer:AVCaptureVideoPreviewLayer!
    var captureDevice : AVCaptureDevice!
    let session = AVCaptureSession()
    var replicationLayer: CAReplicatorLayer!


Next, I adopt AVCaptureVideoDataOutputSampleBufferDelegate in an extension and start the camera session:
extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func setupAVCapture() {
        session.sessionPreset = AVCaptureSessionPreset640x480
        guard let device = AVCaptureDevice
            .defaultDevice(withDeviceType: .builtInWideAngleCamera,
                           mediaType: AVMediaTypeVideo,
                           position: .back) else{
                            return
        }
        captureDevice = device
        beginSession()
    }
    
    func beginSession() {
        // wrap the capture device in an input and attach it to the session
        let deviceInput: AVCaptureDeviceInput
        do {
            deviceInput = try AVCaptureDeviceInput(device: captureDevice)
        } catch {
            print("error: \(error.localizedDescription)")
            return
        }
        if self.session.canAddInput(deviceInput) {
            self.session.addInput(deviceInput)
        }
        
        videoDataOutput = AVCaptureVideoDataOutput()
        videoDataOutput.alwaysDiscardsLateVideoFrames = true
        videoDataOutputQueue = DispatchQueue(label: "VideoDataOutputQueue")
        videoDataOutput.setSampleBufferDelegate(self, queue: self.videoDataOutputQueue)
        if session.canAddOutput(self.videoDataOutput) {
            session.addOutput(self.videoDataOutput)
        }
        videoDataOutput.connection(withMediaType: AVMediaTypeVideo).isEnabled = true
        
        self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session)
        self.previewLayer.frame = self.captureView.bounds
        self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspect
        
        self.replicationLayer = CAReplicatorLayer()
        self.replicationLayer.frame = self.captureView.bounds
        self.replicationLayer.instanceCount = 1
        self.replicationLayer.instanceTransform = CATransform3DMakeTranslation(0.0, self.captureView.bounds.size.height, 0.0)
        
        self.replicationLayer.addSublayer(self.previewLayer)
        self.captureView.layer.addSublayer(self.replicationLayer)
        self.flipView.layer.addSublayer(self.replicationLayer)
        self.InvertView.layer.addSublayer(self.replicationLayer)
        
        session.startRunning()
    }
    
    func captureOutput(_ captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       from connection: AVCaptureConnection!) {
        // do stuff here
    }
    
    // clean up AVCapture
    func stopCamera() {
        session.stopRunning()
    }
    
}
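
For completeness, setupAVCapture() has to be called once somewhere; I call it when the view loads. The call site below is a minimal sketch, not part of the snippet above:

    override func viewDidLoad() {
        super.viewDidLoad()
        self.setupAVCapture()  // assumption: start capture as soon as the view loads
    }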

I am using CAReplicatorLayer here to show the captured video in the three views. With self.replicationLayer.instanceCount set to 1, I get the following output.

[Screenshot: output with instanceCount = 1]

If I set self.replicationLayer.instanceCount to 3, the output looks like this.

[Screenshot: output with instanceCount = 3]
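
From the documentation, my understanding (which may be exactly my problem) is that CAReplicatorLayer only draws its copies inside its own layer tree, each copy offset by instanceTransform; it does not distribute copies into sibling views. A minimal standalone sketch of that behavior, with illustrative frame and offset values:

import UIKit

let replicator = CAReplicatorLayer()
replicator.frame = CGRect(x: 0, y: 0, width: 320, height: 480)
replicator.instanceCount = 3
// stack the copies vertically, one tile-height apart (illustrative values)
replicator.instanceTransform = CATransform3DMakeTranslation(0.0, 160.0, 0.0)

let tile = CALayer()
tile.frame = CGRect(x: 0, y: 0, width: 320, height: 160)
tile.backgroundColor = UIColor.red.cgColor
replicator.addSublayer(tile)
// all three copies render inside whichever single view hosts `replicator`,
// e.g. someView.layer.addSublayer(replicator)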

Please guide me on how to display the captured video in three separate views, and share some ideas on how to turn the original captured video into its flipped and color-inverted versions. Thanks.

1 Answer

I finally found the answer with the help of the JohnnySlagle/Multiple-Camera-Feeds code.
I created three views, as follows:
@property (weak, nonatomic) IBOutlet UIView *video1;
@property (weak, nonatomic) IBOutlet UIView *video2;
@property (weak, nonatomic) IBOutlet UIView *video3;

Then I slightly modified its setupFeedViews method:
- (void)setupFeedViews {
    NSUInteger numberOfFeedViews = 3;

    for (NSUInteger i = 0; i < numberOfFeedViews; i++) {
        VideoFeedView *feedView = [self setupFeedViewWithFrame:CGRectMake(0, 0, self.video1.frame.size.width, self.video1.frame.size.height)];
        feedView.tag = i+1;
        switch (i) {
            case 0:
                [self.video1 addSubview:feedView];
                break;
            case 1:
                [self.video2 addSubview:feedView];
                break;
            case 2:
                [self.video3 addSubview:feedView];
                break;
            default:
                break;
        }
        [self.feedViews addObject:feedView];
    }
}
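
VideoFeedView comes from that project. Judging from the bindDrawable / viewBounds / display calls in the delegate below, it is a thin GLKView subclass that caches its drawable bounds in pixels. A rough Swift equivalent of what the code relies on (an assumption, not the project's actual implementation):

import GLKit

// assumption: a minimal stand-in for the project's VideoFeedView
class VideoFeedView: GLKView {
    // drawable bounds in pixels, cached for CIContext drawing
    var viewBounds = CGRect.zero

    override init(frame: CGRect, context: EAGLContext) {
        super.init(frame: frame, context: context)
        // drawing is driven manually from the capture queue via display()
        enableSetNeedsDisplay = false
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        // drawableWidth/Height are in pixels -- the units CIContext draws in
        viewBounds = CGRect(x: 0, y: 0, width: drawableWidth, height: drawableHeight)
    }
}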

Then I apply the filters in the AVCaptureVideoDataOutputSampleBufferDelegate callback:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);

    // update the video dimensions information
    _currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDesc);

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil];

    CGRect sourceExtent = sourceImage.extent;

    CGFloat sourceAspect = sourceExtent.size.width / sourceExtent.size.height;


    for (VideoFeedView *feedView in self.feedViews) {
        CGFloat previewAspect = feedView.viewBounds.size.width / feedView.viewBounds.size.height;
        // we want to maintain the aspect ratio of the screen size, so we center-crop the video image
        CGRect drawRect = sourceExtent;
        if (sourceAspect > previewAspect) {
            // use full height of the video image, and center crop the width
            drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0;
            drawRect.size.width = drawRect.size.height * previewAspect;
        } else {
            // use full width of the video image, and center crop the height
            drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0;
            drawRect.size.height = drawRect.size.width / previewAspect;
        }
        [feedView bindDrawable];

        if (_eaglContext != [EAGLContext currentContext]) {
            [EAGLContext setCurrentContext:_eaglContext];
        }

        // clear eagl view to grey
        glClearColor(0.5, 0.5, 0.5, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);

        // set the blend mode to "source over" so that CI will use that
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // This is necessary for non-power-of-two textures
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        if (feedView.tag == 1) {
            if (sourceImage) {
                [_ciContext drawImage:sourceImage inRect:feedView.viewBounds fromRect:drawRect];
            }
        } else if (feedView.tag == 2) {
            sourceImage = [sourceImage imageByApplyingTransform:CGAffineTransformMakeScale(1, -1)];
            sourceImage = [sourceImage imageByApplyingTransform:CGAffineTransformMakeTranslation(0, sourceExtent.size.height)];
            if (sourceImage) {
                [_ciContext drawImage:sourceImage inRect:feedView.viewBounds fromRect:drawRect];
            }
        } else {
            CIFilter *effectFilter = [CIFilter filterWithName:@"CIColorInvert"];
            [effectFilter setValue:sourceImage forKey:kCIInputImageKey];
            CIImage *invertImage = [effectFilter outputImage];
            if (invertImage) {
                [_ciContext drawImage:invertImage inRect:feedView.viewBounds fromRect:drawRect];
            }
        }
        [feedView display];
    }
}
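
Back in Swift (the language of my original attempt), the per-frame flip and invert steps look roughly like this. This is a sketch only; drawing via CIContext works exactly as in the Objective-C version above:

import CoreImage

// sourceImage is the CIImage built from the captured pixel buffer
func variants(of sourceImage: CIImage) -> (flipped: CIImage, inverted: CIImage?) {
    let extent = sourceImage.extent

    // vertical flip: mirror in Y, then translate back into positive coordinates
    let flipped = sourceImage
        .applying(CGAffineTransform(scaleX: 1, y: -1))
        .applying(CGAffineTransform(translationX: 0, y: extent.size.height))

    // color inversion via the built-in CIColorInvert filter
    let invertFilter = CIFilter(name: "CIColorInvert")
    invertFilter?.setValue(sourceImage, forKey: kCIInputImageKey)
    return (flipped, invertFilter?.outputImage)
}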

That's it. This successfully met my requirements.

