How to display an AVCaptureVideoPreviewLayer in SwiftUI

In earlier (UIKit-based) Swift projects, you would create an AVCaptureVideoPreviewLayer in a ViewController and add it to the default view with view.layer.addSublayer(previewLayer).
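(For reference, a minimal sketch of that UIKit pattern, assuming a capture session configured elsewhere; the PreviewViewController name is illustrative:)

import UIKit
import AVFoundation

class PreviewViewController: UIViewController {

    // Assumed to be configured with inputs/outputs elsewhere.
    let captureSession = AVCaptureSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.frame = view.bounds
        previewLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(previewLayer) // the UIKit approach referred to above
    }
}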
How do you do this in a SwiftUI ContentView? No SwiftUI View type seems to have an addSublayer method; for example, Text("Hello World").layer.addSublayer... does not compile.
I have tried adding the previewLayer to various views in ContentView.
import Foundation
import AVFoundation
import Combine
import SwiftUI

class Scanner: NSObject, AVCaptureMetadataOutputObjectsDelegate, ObservableObject {

    @Published var captureSession: AVCaptureSession!
    @Published var previewLayer: AVCaptureVideoPreviewLayer!
    @Published var previewView: UIView = UIView() // given a default value so the stored property compiles

    override init() {

        captureSession = AVCaptureSession()
        previewLayer = nil
        //previewView = UIView()

        super.init()

        guard let videoCaptureDevice = AVCaptureDevice.default(for: .video) else { return }
        let videoInput: AVCaptureDeviceInput

        do {
            videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
        } catch {
            return
        }

        if (captureSession.canAddInput(videoInput)) {
            captureSession.addInput(videoInput)
        } else {
            failed()
            return
        }

        let metadataOutput = AVCaptureMetadataOutput()

        if (captureSession.canAddOutput(metadataOutput)) {
            captureSession.addOutput(metadataOutput)

            metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
            metadataOutput.metadataObjectTypes = [.qr]
        } else {
            failed()
            return
        }

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.videoGravity = .resizeAspectFill

        //previewView.layer.addSublayer(previewLayer)

    }

    func failed() {
        // Placeholder so the class compiles; in a real app, show a "scanning not supported" alert here.
        captureSession = nil
    }
}

import SwiftUI
import Combine


struct ContentView: View {

    @ObservedObject var scanner = Scanner()

    var body: some View {

        //Text("Hello World").layer.addSublayer(scanner.previewLayer)
        //Text("")
        Text("HelloWorld")//.addSublayer(scanner.previewLayer))

            //.previewLayout(scanner.previewLayer)
            .layer.addSublayer(scanner.previewLayer)

            //.previewLayout(scanner.previewLayer)

            //.overlay(scanner.previewView)

        scanner.captureSession.startRunning()

    }
}

Compile errors when trying to add the preview layer.

2 Answers


You cannot add a layer directly. That is why, at the moment, people wrap this whole thing inside a UIView(Controller)Representable, as they do for many other things.
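A minimal sketch of that wrapping, assuming the question's Scanner keeps exposing its captureSession; the CameraPreview and PreviewView names are illustrative:

import SwiftUI
import AVFoundation

struct CameraPreview: UIViewRepresentable {

    let session: AVCaptureSession

    // A UIView whose backing layer *is* an AVCaptureVideoPreviewLayer,
    // so the preview automatically tracks the view's size.
    final class PreviewView: UIView {
        override class var layerClass: AnyClass { AVCaptureVideoPreviewLayer.self }
        var previewLayer: AVCaptureVideoPreviewLayer { layer as! AVCaptureVideoPreviewLayer }
    }

    func makeUIView(context: Context) -> PreviewView {
        let view = PreviewView()
        view.previewLayer.session = session
        view.previewLayer.videoGravity = .resizeAspectFill
        return view
    }

    func updateUIView(_ uiView: PreviewView, context: Context) {}
}

The SwiftUI side then simply embeds it:

struct ContentView: View {

    @ObservedObject var scanner = Scanner()

    var body: some View {
        CameraPreview(session: scanner.captureSession)
            .onAppear { scanner.captureSession.startRunning() } // in practice, start the session off the main thread
    }
}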


Peder, I had to recreate it and it is very basic... here is the link: https://github.com/owingst/SwiftUIScanner - Tim

I was able to get the captured picture into a SwiftUI view. In the view component, just put an
Image(uiImage: cameraManager.capturedImage)

For the CameraManager, use frame capture: after converting the sample buffer to a UIImage, simply set capturedImage to that uiImage (see https://medium.com/ios-os-x-development/ios-camera-frames-extraction-d2c0f80ed05a). A sketch of a view that consumes capturedImage follows the extension below.

class CameraManager: NSObject, ObservableObject{
    ...
    @Published public var capturedImage: UIImage = UIImage()
    ...
}

extension CameraManager: AVCaptureVideoDataOutputSampleBufferDelegate {

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

        // transforming sample buffer to UIImage
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let ciImage = CIImage(cvPixelBuffer: imageBuffer)
        let context = CIContext()
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
        let uiImage = UIImage(cgImage: cgImage)
        
        // publishing changes to the main thread
        DispatchQueue.main.async {
            self.capturedImage = uiImage
        }
    }

}
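
A minimal sketch of the consuming view, assuming CameraManager configures and starts its own AVCaptureSession with an AVCaptureVideoDataOutput (not shown above); the FrameView name is illustrative:

import SwiftUI

struct FrameView: View {

    // CameraManager is an ObservableObject, so SwiftUI re-renders on every published frame.
    @ObservedObject var cameraManager = CameraManager()

    var body: some View {
        Image(uiImage: cameraManager.capturedImage)
            .resizable()
            .scaledToFill()
    }
}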

I originally wanted to capture every frame for later image processing. The preview works now, but I am not sure whether adding computer vision algorithms will affect frame capture; so far it seems that running the model would happen on another thread... OK, comments are welcome.

This is not very efficient. Please see my comment: https://stackoverflow.com/q/74908056/1971013 - meaning-matters
The only problem with this approach is memory. When you capture in 4K the images are very large, and you are passing them into the view and rendering them there. Because of the nature of SwiftUI, the view updates every time a new image arrives; now imagine how much space a 3840x2160 frame takes. If you also add an extra filter layer to each frame with CIImage's applyingFilter, the preview becomes somewhat choppy and consumes a lot of memory. With AVCaptureVideoPreviewLayer we do not have this problem, even when previewing 4K. - Sajjad Sarkoobi
