Holding onto an MTLTexture taken from a CVImageBuffer causes stuttering

20

I'm creating an MTLTexture from CVImageBuffers provided by the camera and by a video player, using CVMetalTextureCacheCreateTextureFromImage, and then grabbing the MTLTexture with CVMetalTextureGetTexture. The problem is that when I later render the texture with Metal, I occasionally see video frames rendered out of order (visually it stutters back and forth in time), presumably because CoreVideo is modifying the underlying CVImageBuffer storage and the MTLTexture is just pointing there.

Is there any way to make CoreVideo not touch that buffer and use another one from its pool until I release the MTLTexture object?

My current workaround is to blit the texture off with an MTLBlitCommandEncoder, but since I only need to hold on to the texture for about 30 ms, that seems unnecessary.
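For reference, what I'm doing is roughly the following (a simplified sketch; textureCache, imageBuffer, width, height, and the self.texture property stand in for my actual code):

var cvTextureOut: CVMetalTexture?
let result = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, imageBuffer, nil, .bgra8Unorm, width, height, 0, &cvTextureOut)

if result == kCVReturnSuccess, let cvTexture = cvTextureOut, let texture = CVMetalTextureGetTexture(cvTexture) {
    // The MTLTexture shares storage with the CVImageBuffer; nothing is copied.
    // It is stored here and rendered later, outside this callback.
    self.texture = texture
}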


Are you keeping a strong reference to the CVMetalTexture for the whole time you're using the Metal texture, until you're done with it? Or only to the MTLTexture object? - Ken Thomases
I'm only holding a strong reference to the MTLTexture, due to some implementation details. Would holding on to the CVMetalTexture or CVImageBuffer objects solve my problem? - Blixt
I don't know. It might. That's just a guess on my part. If it's easy for you to try, you should. :) - Ken Thomases
Maybe an Apple TSI? - Rhythmic Fistman
4 Answers

13

I recently ran into this exact same issue. The problem is that the MTLTexture is not valid unless its owning CVMetalTextureRef is still alive. You must keep a reference to the CVMetalTextureRef for the entire time you are using the MTLTexture (all the way until the end of the current rendering cycle).
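A minimal sketch of that, assuming a textureCache, a pixelBuffer, its width and height, and the current frame's commandBuffer are in scope (names are illustrative):

var cvTextureOut: CVMetalTexture?
CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, nil, .bgra8Unorm, width, height, 0, &cvTextureOut)

guard let cvTexture = cvTextureOut, let texture = CVMetalTextureGetTexture(cvTexture) else { return }

// Capturing cvTexture in the handler keeps the CVMetalTextureRef (and with it
// the storage backing the MTLTexture) alive until the GPU has finished the
// current rendering cycle.
commandBuffer.addCompletedHandler { _ in
    _ = cvTexture
}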


This was the key to texturing from CMSampleBufferRefs successfully under Metal, thanks! - C0C0AL0C0
C0C0AL0C0, I take it this is also the solution to the screen tearing in Apple's sample code MetalVideoCapture? https://stackoverflow.com/questions/38879518/screen-tearing-and-camera-capture-with-metal (you had answered that question, but the answer has since been deleted) - Gary
Keeping an array of CVMetalTextureRefs solved the problem, but it caused a memory leak, leading to low-memory crashes. How can I fix this? - arifcse10
Thanks jasongregori! - Manifestor

3
I ran into the same problem, but in my case having an extra reference to the CVMetalTexture object did not solve it.
As far as I can tell, it happens only when a new frame arrives from the camera while the previous one is still being processed.
It seems that CVMetalTextureCacheCreateTextureFromImage simply creates a texture on top of the pixel buffer the camera is feeding data into, so accessing it from Metal code asynchronously causes problems.
I decided to make a copy of the MTLTexture (which is also asynchronous, but fast enough).
Here is the description of CVMetalTextureCacheCreateTextureFromImage():
"This function creates or returns a cached CoreVideo Metal texture buffer mapped to an image buffer according to the specified parameters, creating a live binding between a device-based image buffer and an MTLTexture object."
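A rough sketch of such a copy, assuming device, commandQueue, and the camera-backed sourceTexture are in scope (all names illustrative):

let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: sourceTexture.pixelFormat,
                                                          width: sourceTexture.width,
                                                          height: sourceTexture.height,
                                                          mipmapped: false)
// The copy lives in memory we own, so CoreVideo is free to reuse its buffer.
let copy = device.makeTexture(descriptor: descriptor)
let commandBuffer = commandQueue.makeCommandBuffer()
let blitEncoder = commandBuffer.makeBlitCommandEncoder()
blitEncoder.copy(from: sourceTexture, sourceSlice: 0, sourceLevel: 0,
                 sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
                 sourceSize: MTLSize(width: sourceTexture.width, height: sourceTexture.height, depth: 1),
                 to: copy, destinationSlice: 0, destinationLevel: 0,
                 destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
blitEncoder.endEncoding()
commandBuffer.commit()
// Once the buffer completes, copy can safely be rendered at any later time.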

1
Could you elaborate on how you do the copy and feed it back? - Gizmodo
Any example / pseudocode on how to achieve this? - JCutting8

-1

The problem may be caused by your camera input. If your footage is not at exactly the same frame rate as your intended output, a frame-rate mismatch will cause strange ghosting. Try disabling auto-adjusting frame rate.

Other possible causes of this problem include one of the following:

CRITICAL SPEEDS: There are certain speeds that sync up with frame rates in a way that causes stuttering. The lower the frame rate, the more obvious the problem.

SUB-PIXEL INTERPOLATION: There are also cases where the sub-pixel interpolation between frames causes areas of detail to flicker from frame to frame.

The solution for successful rendering is to use the right speed (pixels per second) for your frame rate, add enough motion blur to hide the problem, or reduce the amount of detail in the image.
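If you want to rule out frame-rate auto-adjustment, a rough sketch of pinning the capture device to a fixed rate (assuming captureDevice is your configured AVCaptureDevice) would be:

do {
    try captureDevice.lockForConfiguration()
    // Pin both bounds to the same duration so the device cannot adapt its rate.
    let duration = CMTimeMake(1, 30) // 30 fps
    captureDevice.activeVideoMinFrameDuration = duration
    captureDevice.activeVideoMaxFrameDuration = duration
    captureDevice.unlockForConfiguration()
} catch {
    print("Could not lock the capture device: \(error)")
}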


The input shouldn't be the problem, because if I simply copy the buffer in the callback, everything is fine. The problem only manifests when I get an MTLTexture from the buffer and try to render it later (outside the callback). I see no artifacts in the video data provided to me. - Blixt

-1

It seems your problem depends on how you manage the session to get the raw camera data.

I think you can analyze the camera session in depth, and in real time, to find out the current state of your session with this class (MetalCameraSession):

import AVFoundation
import Metal
public protocol MetalCameraSessionDelegate {
    func metalCameraSession(_ session: MetalCameraSession, didReceiveFrameAsTextures: [MTLTexture], withTimestamp: Double)
    func metalCameraSession(_ session: MetalCameraSession, didUpdateState: MetalCameraSessionState, error: MetalCameraSessionError?)
}
public final class MetalCameraSession: NSObject {
    public var frameOrientation: AVCaptureVideoOrientation? {
        didSet {
            guard
                let frameOrientation = frameOrientation,
                let outputData = outputData,
                outputData.connection(withMediaType: AVMediaTypeVideo).isVideoOrientationSupported
            else { return }

            outputData.connection(withMediaType: AVMediaTypeVideo).videoOrientation = frameOrientation
        }
    }
    public let captureDevicePosition: AVCaptureDevicePosition
    public var delegate: MetalCameraSessionDelegate?
    public let pixelFormat: MetalCameraPixelFormat
    public init(pixelFormat: MetalCameraPixelFormat = .rgb, captureDevicePosition: AVCaptureDevicePosition = .back, delegate: MetalCameraSessionDelegate? = nil) {
        self.pixelFormat = pixelFormat
        self.captureDevicePosition = captureDevicePosition
        self.delegate = delegate
        super.init()
        NotificationCenter.default.addObserver(self, selector: #selector(captureSessionRuntimeError), name: NSNotification.Name.AVCaptureSessionRuntimeError, object: nil)
    }
    public func start() {
        requestCameraAccess()
        captureSessionQueue.async(execute: {
            do {
                self.captureSession.beginConfiguration()
                try self.initializeInputDevice()
                try self.initializeOutputData()
                self.captureSession.commitConfiguration()
                try self.initializeTextureCache()
                self.captureSession.startRunning()
                self.state = .streaming
            }
            catch let error as MetalCameraSessionError {
                self.handleError(error)
            }
            catch {
                print(error.localizedDescription)
            }
        })
    }
    public func stop() {
        captureSessionQueue.async(execute: {
            self.captureSession.stopRunning()
            self.state = .stopped
        })
    }
    fileprivate var state: MetalCameraSessionState = .waiting {
        didSet {
            guard state != .error else { return }

            delegate?.metalCameraSession(self, didUpdateState: state, error: nil)
        }
    }
    fileprivate var captureSession = AVCaptureSession()
    internal var captureDevice = MetalCameraCaptureDevice()
    fileprivate var captureSessionQueue = DispatchQueue(label: "MetalCameraSessionQueue", attributes: [])
#if arch(i386) || arch(x86_64)
#else
    /// Texture cache we will use for converting frame images to textures
    internal var textureCache: CVMetalTextureCache?
#endif
    fileprivate var metalDevice = MTLCreateSystemDefaultDevice()
    internal var inputDevice: AVCaptureDeviceInput? {
        didSet {
            if let oldValue = oldValue {
                captureSession.removeInput(oldValue)
            }
            captureSession.addInput(inputDevice)
        }
    }
    internal var outputData: AVCaptureVideoDataOutput? {
        didSet {
            if let oldValue = oldValue {
                captureSession.removeOutput(oldValue)
            }
            captureSession.addOutput(outputData)
        }
    }
    fileprivate func requestCameraAccess() {
        captureDevice.requestAccessForMediaType(AVMediaTypeVideo) {
            (granted: Bool) -> Void in
            guard granted else {
                self.handleError(.noHardwareAccess)
                return
            }

            if self.state != .streaming && self.state != .error {
                self.state = .ready
            }
        }
    }
    fileprivate func handleError(_ error: MetalCameraSessionError) {
        if error.isStreamingError() {
            state = .error
        }

        delegate?.metalCameraSession(self, didUpdateState: state, error: error)
    }
    fileprivate func initializeTextureCache() throws {
#if arch(i386) || arch(x86_64)
        throw MetalCameraSessionError.failedToCreateTextureCache
#else
        guard
            let metalDevice = metalDevice,
            CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, metalDevice, nil, &textureCache) == kCVReturnSuccess
        else {
            throw MetalCameraSessionError.failedToCreateTextureCache
        }
#endif
    }
    fileprivate func initializeInputDevice() throws {
        var captureInput: AVCaptureDeviceInput!
        guard let inputDevice = captureDevice.device(mediaType: AVMediaTypeVideo, position: captureDevicePosition) else {
            throw MetalCameraSessionError.requestedHardwareNotFound
        }
        do {
            captureInput = try AVCaptureDeviceInput(device: inputDevice)
        }
        catch {
            throw MetalCameraSessionError.inputDeviceNotAvailable
        }
        guard captureSession.canAddInput(captureInput) else {
            throw MetalCameraSessionError.failedToAddCaptureInputDevice
        }
        self.inputDevice = captureInput
    }
    fileprivate func initializeOutputData() throws {
        let outputData = AVCaptureVideoDataOutput()

        outputData.videoSettings = [
            kCVPixelBufferPixelFormatTypeKey as AnyHashable : Int(pixelFormat.coreVideoType)
        ]
        outputData.alwaysDiscardsLateVideoFrames = true
        outputData.setSampleBufferDelegate(self, queue: captureSessionQueue)

        guard captureSession.canAddOutput(outputData) else {
            throw MetalCameraSessionError.failedToAddCaptureOutput
        }

        self.outputData = outputData
    }
    @objc
    fileprivate func captureSessionRuntimeError() {
        if state == .streaming {
            handleError(.captureSessionRuntimeError)
        }
    }
    deinit {
        NotificationCenter.default.removeObserver(self)
    }
}
extension MetalCameraSession: AVCaptureVideoDataOutputSampleBufferDelegate {
#if arch(i386) || arch(x86_64)
#else
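    /// Creates an MTLTexture for one plane of the sample buffer's image
    /// buffer, backed by the session's CVMetalTextureCache.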
    private func texture(sampleBuffer: CMSampleBuffer?, textureCache: CVMetalTextureCache?, planeIndex: Int = 0, pixelFormat: MTLPixelFormat = .bgra8Unorm) throws -> MTLTexture {
        guard let sampleBuffer = sampleBuffer else {
            throw MetalCameraSessionError.missingSampleBuffer
        }
        guard let textureCache = textureCache else {
            throw MetalCameraSessionError.failedToCreateTextureCache
        }
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            throw MetalCameraSessionError.failedToGetImageBuffer
        }
        let isPlanar = CVPixelBufferIsPlanar(imageBuffer)
        let width = isPlanar ? CVPixelBufferGetWidthOfPlane(imageBuffer, planeIndex) : CVPixelBufferGetWidth(imageBuffer)
        let height = isPlanar ? CVPixelBufferGetHeightOfPlane(imageBuffer, planeIndex) : CVPixelBufferGetHeight(imageBuffer)
        var imageTexture: CVMetalTexture?
        let result = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, imageBuffer, nil, pixelFormat, width, height, planeIndex, &imageTexture)
        guard
            let unwrappedImageTexture = imageTexture,
            let texture = CVMetalTextureGetTexture(unwrappedImageTexture),
            result == kCVReturnSuccess
        else {
            throw MetalCameraSessionError.failedToCreateTextureFromImage
        }
        return texture
    }
    private func timestamp(sampleBuffer: CMSampleBuffer?) throws -> Double {
        guard let sampleBuffer = sampleBuffer else {
            throw MetalCameraSessionError.missingSampleBuffer
        }

        let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)

        guard time != kCMTimeInvalid else {
            throw MetalCameraSessionError.failedToRetrieveTimestamp
        }

        return Double(time.value) / Double(time.timescale)
    }
    @objc public func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        do {
            var textures: [MTLTexture]!

            switch pixelFormat {
            case .rgb:
                let textureRGB = try texture(sampleBuffer: sampleBuffer, textureCache: textureCache)
                textures = [textureRGB]
            case .yCbCr:
                let textureY = try texture(sampleBuffer: sampleBuffer, textureCache: textureCache, planeIndex: 0, pixelFormat: .r8Unorm)
                let textureCbCr = try texture(sampleBuffer: sampleBuffer, textureCache: textureCache, planeIndex: 1, pixelFormat: .rg8Unorm)
                textures = [textureY, textureCbCr]
            }

            let timestamp = try self.timestamp(sampleBuffer: sampleBuffer)

            delegate?.metalCameraSession(self, didReceiveFrameAsTextures: textures, withTimestamp: timestamp)
        }
        catch let error as MetalCameraSessionError {
            self.handleError(error)
        }
        catch {
            print(error.localizedDescription)
        }
    }
#endif
}

Use this class to understand the different session states and the errors that can occur (MetalCameraSessionTypes):

import AVFoundation
public enum MetalCameraSessionState {
    case ready
    case streaming
    case stopped
    case waiting
    case error
}
public enum MetalCameraPixelFormat {
    case rgb
    case yCbCr
    var coreVideoType: OSType {
        switch self {
        case .rgb:
            return kCVPixelFormatType_32BGRA
        case .yCbCr:
            return kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
        }
    }
}
public enum MetalCameraSessionError: Error {
    case noHardwareAccess
    case failedToAddCaptureInputDevice
    case failedToAddCaptureOutput
    case requestedHardwareNotFound
    case inputDeviceNotAvailable
    case captureSessionRuntimeError
    case failedToCreateTextureCache
    case missingSampleBuffer
    case failedToGetImageBuffer
    case failedToCreateTextureFromImage
    case failedToRetrieveTimestamp
    public func isStreamingError() -> Bool {
        switch self {
        case .noHardwareAccess, .failedToAddCaptureInputDevice, .failedToAddCaptureOutput, .requestedHardwareNotFound, .inputDeviceNotAvailable, .captureSessionRuntimeError:
            return true
        default:
            return false
        }
    }
    public var localizedDescription: String {
        switch self {
        case .noHardwareAccess:
            return "Failed to get access to the hardware for a given media type."
        case .failedToAddCaptureInputDevice:
            return "Failed to add a capture input device to the capture session."
        case .failedToAddCaptureOutput:
            return "Failed to add a capture output data channel to the capture session."
        case .requestedHardwareNotFound:
            return "Specified hardware is not available on this device."
        case .inputDeviceNotAvailable:
            return "Capture input device cannot be opened, probably because it is no longer available or because it is in use."
        case .captureSessionRuntimeError:
            return "AVCaptureSession runtime error."
        case .failedToCreateTextureCache:
            return "Failed to initialize texture cache."
        case .missingSampleBuffer:
            return "No sample buffer to convert the image from."
        case .failedToGetImageBuffer:
            return "Failed to retrieve an image buffer from camera's output sample buffer."
        case .failedToCreateTextureFromImage:
            return "Failed to convert the frame to a Metal texture."
        case .failedToRetrieveTimestamp:
            return "Failed to retrieve timestamp from the sample buffer."
        }
    }
}

Then you can use a wrapper around AVFoundation's AVCaptureDevice that has instance methods instead of class methods (MetalCameraCaptureDevice):

import AVFoundation
internal class MetalCameraCaptureDevice {
    internal func device(mediaType: String, position: AVCaptureDevicePosition) -> AVCaptureDevice? {
        guard let devices = AVCaptureDevice.devices(withMediaType: mediaType) as? [AVCaptureDevice] else { return nil }

        if let index = devices.index(where: { $0.position == position }) {
            return devices[index]
        }
        return nil
    }
    internal func requestAccessForMediaType(_ mediaType: String!, completionHandler handler: ((Bool) -> Void)!) {
        AVCaptureDevice.requestAccess(forMediaType: mediaType, completionHandler: handler)
    }
}

Then you can have a custom view controller class to control the camera, like this one (CameraViewController):

import UIKit
import Metal
internal final class CameraViewController: MTKViewController {
    var session: MetalCameraSession?
    override func viewDidLoad() {
        super.viewDidLoad()
        session = MetalCameraSession(delegate: self)
    }
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        session?.start()
    }
    override func viewDidDisappear(_ animated: Bool) {
        super.viewDidDisappear(animated)
        session?.stop()
    }
}
// MARK: - MetalCameraSessionDelegate
extension CameraViewController: MetalCameraSessionDelegate {
    func metalCameraSession(_ session: MetalCameraSession, didReceiveFrameAsTextures textures: [MTLTexture], withTimestamp timestamp: Double) {
        self.texture = textures[0]
    }
    func metalCameraSession(_ cameraSession: MetalCameraSession, didUpdateState state: MetalCameraSessionState, error: MetalCameraSessionError?) {
        if error == .captureSessionRuntimeError {
            print(error?.localizedDescription ?? "None")
            cameraSession.start()
        }
        DispatchQueue.main.async { 
            self.title = "Metal camera: \(state)"
        }
        print("Session changed state to \(state) with error: \(error?.localizedDescription ?? "None").")
    }
}

Finally, your class can be like this one (MTKViewController), where you have
public func draw(in: MTKView)

which gets you exactly the MTLTexture you expect from the camera buffer:

import UIKit
import Metal
#if arch(i386) || arch(x86_64)
#else
    import MetalKit
#endif
open class MTKViewController: UIViewController {
    open var texture: MTLTexture?
    open func willRenderTexture(_ texture: inout MTLTexture, withCommandBuffer commandBuffer: MTLCommandBuffer, device: MTLDevice) {
    }
    open func didRenderTexture(_ texture: MTLTexture, withCommandBuffer commandBuffer: MTLCommandBuffer, device: MTLDevice) {
    }
    override open func loadView() {
        super.loadView()
#if arch(i386) || arch(x86_64)
        NSLog("Failed creating a default system Metal device, since Metal is not available on iOS Simulator.")
#else
        assert(device != nil, "Failed creating a default system Metal device. Please, make sure Metal is available on your hardware.")
#endif
        initializeMetalView()
        initializeRenderPipelineState()
    }
    fileprivate func initializeMetalView() {
#if arch(i386) || arch(x86_64)
#else
        metalView = MTKView(frame: view.bounds, device: device)
        metalView.delegate = self
        metalView.framebufferOnly = true
        metalView.colorPixelFormat = .bgra8Unorm
        metalView.contentScaleFactor = UIScreen.main.scale
        metalView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.insertSubview(metalView, at: 0)
#endif
    }
#if arch(i386) || arch(x86_64)
#else
    internal var metalView: MTKView!
#endif
    internal var device = MTLCreateSystemDefaultDevice()
    internal var renderPipelineState: MTLRenderPipelineState?
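    /// Allows only one frame in flight at a time; signaled from the command
    /// buffer's scheduled handler so the next draw can proceed.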
    fileprivate let semaphore = DispatchSemaphore(value: 1)
    fileprivate func initializeRenderPipelineState() {
        guard
            let device = device,
            let library = device.newDefaultLibrary()
        else { return }

        let pipelineDescriptor = MTLRenderPipelineDescriptor()
        pipelineDescriptor.sampleCount = 1
        pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
        pipelineDescriptor.depthAttachmentPixelFormat = .invalid
        pipelineDescriptor.vertexFunction = library.makeFunction(name: "mapTexture")
        pipelineDescriptor.fragmentFunction = library.makeFunction(name: "displayTexture")
        do {
            try renderPipelineState = device.makeRenderPipelineState(descriptor: pipelineDescriptor)
        }
        catch {
            assertionFailure("Failed creating a render state pipeline. Can't render the texture without one.")
            return
        }
    }
}
#if arch(i386) || arch(x86_64)
#else
extension MTKViewController: MTKViewDelegate {
    public func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
        NSLog("MTKView drawable size will change to \(size)")
    }
    public func draw(in: MTKView) {
        _ = semaphore.wait(timeout: DispatchTime.distantFuture)
        autoreleasepool {
            guard
                var texture = texture,
                let device = device
            else {
                _ = semaphore.signal()
                return
            }
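            // Note: this creates a new command queue on every draw call;
            // reusing a single queue would be cheaper.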
            let commandBuffer = device.makeCommandQueue().makeCommandBuffer()
            willRenderTexture(&texture, withCommandBuffer: commandBuffer, device: device)
            render(texture: texture, withCommandBuffer: commandBuffer, device: device)
        }
    }
    private func render(texture: MTLTexture, withCommandBuffer commandBuffer: MTLCommandBuffer, device: MTLDevice) {
        guard
            let currentRenderPassDescriptor = metalView.currentRenderPassDescriptor,
            let currentDrawable = metalView.currentDrawable,
            let renderPipelineState = renderPipelineState
        else {
            semaphore.signal()
            return
        }
        let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: currentRenderPassDescriptor)
        encoder.pushDebugGroup("RenderFrame")
        encoder.setRenderPipelineState(renderPipelineState)
        encoder.setFragmentTexture(texture, at: 0)
        encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
        encoder.popDebugGroup()
        encoder.endEncoding()
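        // Once this frame's commands are scheduled on the GPU, notify the
        // subclass and signal the semaphore so the next draw can start.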
        commandBuffer.addScheduledHandler { [weak self] (buffer) in
            guard let unwrappedSelf = self else { return }

            unwrappedSelf.didRenderTexture(texture, withCommandBuffer: buffer, device: device)
            unwrappedSelf.semaphore.signal()
        }
        commandBuffer.present(currentDrawable)
        commandBuffer.commit()
    }
}
#endif

Now you have all the source code, but you can also find the complete code, with comments and instructions, in the author navoshta's GitHub project here, along with a great tutorial about the project here, especially the second part, where you obtain the texture (you can find this code in the MetalCameraSession class):

guard
    let unwrappedImageTexture = imageTexture,
    let texture = CVMetalTextureGetTexture(unwrappedImageTexture),
    result == kCVReturnSuccess
else {
    throw MetalCameraSessionError.failedToCreateTextureFromImage
}

1
That's a lot of source code from a repository, but it doesn't really tell me where to start looking for the error in my own code, even if this source code would solve the problem. At a glance I seem to be doing pretty much the same thing as this source code; do you have any clues as to what matters for avoiding the flickering problem? - Blixt
I think you should focus on the capture session, to make sure your buffer isn't compromised by wrong thread timing, for example with respect to the start of capture and the device's real state. The semaphore in MTKViewController that controls the buffer flow is excellent for ensuring the pipeline is constructed correctly. - Alessandro Ornano
Have you tried this library? - Alessandro Ornano
