Video pixel buffer pool error (kCVReturnInvalidArgument / -6661)

I have implemented the earlier suggestion (How to use CVPixelBufferPool in conjunction with AVAssetWriterInputPixelBufferAdaptor in iPhone?) in Swift, but when I call CVPixelBufferPoolCreatePixelBuffer as instructed, I get "kCVReturnInvalidArgument" (error value: -6661).

Basically, I'm trying to create a movie from images, but the pixel buffers can't be appended because the buffer pool is never created successfully. The code below is what does this.

Any advice would be greatly appreciated!

import Foundation
import Photos
import OpenGLES
import AVFoundation
import CoreMedia

class MovieGenerator {

    var _videoWriter:AVAssetWriter
    var _videoWriterInput: AVAssetWriterInput
    var _adapter: AVAssetWriterInputPixelBufferAdaptor
    var _buffer = UnsafeMutablePointer<Unmanaged<CVPixelBuffer>?>.alloc(1)


    init(frameSize size: CGSize, outputURL url: NSURL) {

    // delete file if exists
    let sharedManager = NSFileManager.defaultManager() as NSFileManager
    if(sharedManager.fileExistsAtPath(url.path!)) {
        sharedManager.removeItemAtPath(url.path, error: nil)
    }

    // video writer
    _videoWriter = AVAssetWriter(URL: url, fileType: AVFileTypeQuickTimeMovie, error: nil)

    // writer input
    var videoSettings = [AVVideoCodecKey:AVVideoCodecH264, AVVideoWidthKey:size.width, AVVideoHeightKey:size.height]
    _videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
    _videoWriterInput.expectsMediaDataInRealTime = true
    _videoWriter.addInput(_videoWriterInput)

    // pixel buffer adapter
    var adapterAttributes = [kCVPixelBufferPixelFormatTypeKey:kCVPixelFormatType_32BGRA, kCVPixelBufferWidthKey: size.width,
        kCVPixelBufferHeightKey: size.height,
        kCVPixelFormatOpenGLESCompatibility: kCFBooleanTrue]

    _adapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: _videoWriterInput, sourcePixelBufferAttributes: adapterAttributes)
    var poolCreateResult:CVReturn = CVPixelBufferPoolCreatePixelBuffer(nil, _adapter.pixelBufferPool, _buffer)
    println("pool creation:\(poolCreateResult)")

    _videoWriter.startWriting()
    _videoWriter.startSessionAtSourceTime(kCMTimeZero)

}

func addImage(image:UIImage, frameNum:Int, fps:Int)->Bool {


    self.createPixelBufferFromCGImage(image.CGImage, pixelBufferPtr: _buffer)

    var presentTime:CMTime = CMTimeMake(Int64(frameNum), Int32(fps))
    var result:Bool = _adapter.appendPixelBuffer(_buffer.memory?.takeUnretainedValue(), withPresentationTime: presentTime)

    return result
}

func finalizeMovie(timeStamp: CMTime) {
    _videoWriterInput.markAsFinished()
    _videoWriter.endSessionAtSourceTime(timeStamp)
    _videoWriter.finishWritingWithCompletionHandler({println("video writer finished with status: \(self._videoWriter.status)")})
}

func createPixelBufferFromCGImage(image: CGImage, pixelBufferPtr: UnsafeMutablePointer<Unmanaged<CVPixelBuffer>?>) {

    let width:UInt = CGImageGetWidth(image)
    let height:UInt = CGImageGetHeight(image)

    let imageData:CFData = CGDataProviderCopyData(CGImageGetDataProvider(image))
    let options:CFDictionary = [kCVPixelBufferCGImageCompatibilityKey:NSNumber.numberWithBool(true), kCVPixelBufferCGBitmapContextCompatibilityKey:NSNumber.numberWithBool(true)]

    var status:CVReturn = CVPixelBufferCreate(kCFAllocatorDefault, width, height, OSType(kCVPixelFormatType_32BGRA), options, pixelBufferPtr)
    assert(status != 0,"CVPixelBufferCreate: \(status)")

    var lockStatus:CVReturn = CVPixelBufferLockBaseAddress(pixelBufferPtr.memory?.takeUnretainedValue(), 0)
    println("CVPixelBufferLockBaseAddress: \(lockStatus)")

    var pxData:UnsafeMutablePointer<(Void)> = CVPixelBufferGetBaseAddress(pixelBufferPtr.memory?.takeUnretainedValue())
    let bitmapinfo = CGBitmapInfo.fromRaw(CGImageAlphaInfo.NoneSkipFirst.toRaw())
    let rgbColorSpace:CGColorSpace = CGColorSpaceCreateDeviceRGB()

    var context:CGContextRef = CGBitmapContextCreate(pxData, width, height, 8, 4*CGImageGetWidth(image), rgbColorSpace, bitmapinfo!)

    CGContextDrawImage(context, CGRectMake(0, 0, CGFloat(width), CGFloat(height)), image)

    CVPixelBufferUnlockBaseAddress(pixelBufferPtr.memory?.takeUnretainedValue(), 0)


}



}

I worked around (or, more accurately, bypassed) the problem by implementing my createPixelBufferFromCGImage function in Obj-C and calling CVPixelBufferPoolCreatePixelBuffer from there. My guess is that our Swift CVPixelBuffer pointer isn't compatible with CVPixelBufferPoolCreatePixelBuffer, which produces the "invalid argument" error. If I get a pure-Swift version working, I'll post an answer. - acj
2 Answers

Frustratingly, I can't answer your question exactly, but I'm writing code that does essentially the same thing. And my code gets further than the error you're seeing: it keeps trying to add images to the movie and then simply fails by never getting a successful result back from appendPixelBuffer(), and I'm not sure how to find out why. I hope this helps you get a little further.
(My code is adapted from AVFoundation + AssetWriter: Generate Movie With Images and Audio, and I used your post to help navigate some of the pointer interop shenanigans...)
func writeAnimationToMovie(path: String, size: CGSize, animation: Animation) -> Bool {
    var error: NSError?
    let writer = AVAssetWriter(URL: NSURL(fileURLWithPath: path), fileType: AVFileTypeQuickTimeMovie, error: &error)

    let videoSettings = [AVVideoCodecKey: AVVideoCodecH264, AVVideoWidthKey: size.width, AVVideoHeightKey: size.height]

    let input = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input, sourcePixelBufferAttributes: nil)
    input.expectsMediaDataInRealTime = true
    writer.addInput(input)

    writer.startWriting()
    writer.startSessionAtSourceTime(kCMTimeZero)

    var buffer: CVPixelBufferRef

    var frameCount = 0
    for frame in animation.frames {
        let rect = CGRectMake(0, 0, size.width, size.height)
        let rectPtr = UnsafeMutablePointer<CGRect>.alloc(1)
        rectPtr.memory = rect
        buffer = pixelBufferFromCGImage(frame.image.CGImageForProposedRect(rectPtr, context: nil, hints: nil).takeUnretainedValue(), size)
        var appendOk = false
        var j = 0
        while (!appendOk && j < 30) {
            if pixelBufferAdaptor.assetWriterInput.readyForMoreMediaData {
                let frameTime = CMTimeMake(Int64(frameCount), 10)
                appendOk = pixelBufferAdaptor.appendPixelBuffer(buffer, withPresentationTime: frameTime)
                // appendOk will always be false
                NSThread.sleepForTimeInterval(0.05)
            } else {
                NSThread.sleepForTimeInterval(0.1)
            }
            j++
        }
        if (!appendOk) {
            println("Doh, frame \(frame) at offset \(frameCount) failed to append")
        }
    }

    input.markAsFinished()
    writer.finishWritingWithCompletionHandler({
        if writer.status == AVAssetWriterStatus.Failed {
            println("oh noes, an error: \(writer.error.description)")
        } else {
            println("hrmmm, there should be a movie?")
        }
    })

    return true;
}

where pixelBufferFromCGImage is defined as follows:

func pixelBufferFromCGImage(image: CGImageRef, size: CGSize) -> CVPixelBufferRef {
    let options = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true]
    var pixBufferPointer = UnsafeMutablePointer<Unmanaged<CVPixelBuffer>?>.alloc(1)

    let status = CVPixelBufferCreate(
        nil,
        UInt(size.width), UInt(size.height),
        OSType(kCVPixelFormatType_32ARGB),
        options,
        pixBufferPointer)

    CVPixelBufferLockBaseAddress(pixBufferPointer.memory?.takeUnretainedValue(), 0)

    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapinfo = CGBitmapInfo.fromRaw(CGImageAlphaInfo.NoneSkipFirst.toRaw())

    var pixBufferData:UnsafeMutablePointer<(Void)> = CVPixelBufferGetBaseAddress(pixBufferPointer.memory?.takeUnretainedValue())

    let context = CGBitmapContextCreate(
        pixBufferData,
        UInt(size.width), UInt(size.height),
        8, UInt(4 * size.width),
        rgbColorSpace, bitmapinfo!)

    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0))
    CGContextDrawImage(
        context,
        CGRectMake(0, 0, CGFloat(CGImageGetWidth(image)), CGFloat(CGImageGetHeight(image))),
        image)

    CVPixelBufferUnlockBaseAddress(pixBufferPointer.memory?.takeUnretainedValue(), 0)
    return pixBufferPointer.memory!.takeUnretainedValue()
}

Ah, silly me. I wasn't actually calling markAsFinished on the input and finishWritingWithCompletionHandler() on the writer. - Eric O'Connell
Did you ever get this working? I'm doing the same thing (porting Objective-C code to Swift). - user1379417


Per the documentation for pixelBufferPool:

This property is NULL before the first call to startSessionAtSourceTime: on the associated AVAssetWriter object.

Moving the call to CVPixelBufferPoolCreatePixelBuffer to the end of init should fix the immediate problem.
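
For example, the tail end of the question's init could be reordered roughly like this (the question's own identifiers and Swift 1.x-era syntax; an untested sketch, not a complete init):

    // Start the writer session first; the adaptor's pixelBufferPool stays nil until then.
    _videoWriter.startWriting()
    _videoWriter.startSessionAtSourceTime(kCMTimeZero)

    // Only now does the pool exist, so this call can succeed instead of returning -6661.
    var poolCreateResult: CVReturn = CVPixelBufferPoolCreatePixelBuffer(nil, _adapter.pixelBufferPool, _buffer)
    println("pool creation:\(poolCreateResult)")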

A few other observations:

  • Your AVAssetWriterInputPixelBufferAdaptor is configured for BGRA, but you're using RGB in createPixelBufferFromCGImage. The final videos will look strange if the pixel formats don't match.
  • You don't need to call CVPixelBufferCreate in your createPixelBufferFromCGImage method. Doing so defeats the purpose of using the buffer pool.
  • If you run this in a tight loop, memory consumption will become a problem. Using autoreleasepool and being careful with takeUnretainedValue vs. takeRetainedValue will help.

I've posted reference implementations for Swift 1.2, 2.0, and 3.0 that use the buffer pool.
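
A minimal sketch of the pool-plus-autoreleasepool idea (not acj's exact reference code) could look like the following in Swift 3-style syntax. The helper name append(image:to:at:) is purely illustrative, and it assumes startWriting()/startSession(atSourceTime:) have already been called so that pixelBufferPool is non-nil:

import AVFoundation
import UIKit

func append(image: UIImage, to adaptor: AVAssetWriterInputPixelBufferAdaptor, at time: CMTime) -> Bool {
    var appended = false
    autoreleasepool {
        // The adaptor's pool is nil until the writer session has been started.
        guard let pool = adaptor.pixelBufferPool, let cgImage = image.cgImage else { return }

        // Vend a buffer from the pool instead of calling CVPixelBufferCreate.
        var pixelBuffer: CVPixelBuffer?
        guard CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer) == kCVReturnSuccess,
              let buffer = pixelBuffer else { return }

        CVPixelBufferLockBaseAddress(buffer, [])
        // Draw into the pool-vended buffer; the bitmap flags here match a 32BGRA adaptor.
        if let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                   width: CVPixelBufferGetWidth(buffer),
                                   height: CVPixelBufferGetHeight(buffer),
                                   bitsPerComponent: 8,
                                   bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                   space: CGColorSpaceCreateDeviceRGB(),
                                   bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                       | CGBitmapInfo.byteOrder32Little.rawValue) {
            context.draw(cgImage, in: CGRect(x: 0, y: 0,
                                             width: CVPixelBufferGetWidth(buffer),
                                             height: CVPixelBufferGetHeight(buffer)))
        }
        CVPixelBufferUnlockBaseAddress(buffer, [])

        appended = adaptor.append(buffer, withPresentationTime: time)
    }
    return appended
}

Because released buffers return to the pool, the adaptor can recycle them across frames rather than allocating a fresh CVPixelBuffer per image.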


What happens if you set the AVAssetWriterInputPixelBufferAdaptor attributes to nil? Other SO answers seem to do that. Can you explain why you set the attributes explicitly? Thanks for sharing the code! - Crashalot
In your 2.0 code, why do you set frameCount = Int64(0) inside the requestMediaDataWhenReadyOnQueue block? Doesn't that mean frameCount gets reset to 0 every time the block is called? For example, if you write out 100 images and then readyForMoreMediaData becomes false, when you start again with image 101, won't frameCount be reset to 0 instead of 101? - Crashalot
@Crashalot You can use nil for the attributes. I had been experimenting with different dimensions and color spaces and then left the code as it was. For the photo sets I used in testing, readyForMoreMediaData was always true, so the situation you describe never came up. You're right that it could be a problem, though; for robustness you should maintain that state (frameCount, etc.) outside of the block. - acj
OK, cool, that's what we assumed. The same should go for marking the writer as finished, then? Would you like a modified version of your code that accounts for these changes? Thanks so much for sharing, this is great! Also, using nil produced a video that was nothing but a green screen, so it seems you do have to define the AVAssetWriterInputPixelBufferAdaptor attributes, at least with your code. - Crashalot
Finished porting the Swift version; please suggest improvements if you spot any problems: https://dev59.com/5G865IYBdhLWcg3wlvmq#36297656 - Crashalot
