Create a YUV-format CVPixelBuffer backed by an IOSurface

I get three separate YUV data arrays from a network callback (a VoIP application). From what I understand, according to this (link), you cannot create an IOSurface-backed pixel buffer with CVPixelBufferCreateWithPlanarBytes:

Important: You cannot use CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() together with kCVPixelBufferIOSurfacePropertiesKey. Calling CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() will result in CVPixelBuffers that are not IOSurface-backed.

So the buffer has to be created with CVPixelBufferCreate, but how do you get the data from the callback into the CVPixelBufferRef you just created?
- (void)videoCallBack:(uint8_t *)yPlane
               uPlane:(uint8_t *)uPlane
               vPlane:(uint8_t *)vPlane
                width:(size_t)width
               height:(size_t)height
              yStride:(size_t)yStride
              uStride:(size_t)uStride
              vStride:(size_t)vStride
{
    NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          width,
                                          height,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                          (__bridge CFDictionaryRef)(pixelAttributes),
                                          &pixelBuffer);
}

I'm not sure what to do next. Eventually I want to turn this into a CIImage and render the video with a GLKView. How do people get the data "into" the buffer once they have created it?

3 Answers


I figured it out, and it was actually fairly simple. Here is the full code. The only issue is that I get a BSXPCMessage received error for message: Connection interrupted error, and it takes a while before the video shows up.

NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                      width,
                                      height,
                                      kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                      (__bridge CFDictionaryRef)(pixelAttributes),
                                      &pixelBuffer);
if (result != kCVReturnSuccess) {
    DDLogWarn(@"Unable to create cvpixelbuffer %d", result);
}

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestPlane, yPlane, width * height);
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
// uvPlane holds the interleaved Cb/Cr bytes; numberOfElementsForChroma is its length in bytes
memcpy(uvDestPlane, uvPlane, numberOfElementsForChroma);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer]; // success!
CVPixelBufferRelease(pixelBuffer);

I forgot to include the code that interleaves the two U and V planes, but that shouldn't be too bad.
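For completeness, a minimal sketch of that interleaving step could look like this (assuming the source U and V planes are tightly packed with no row padding; uvPlane and numberOfElementsForChroma are the names used in the snippet above, chromaWidth and chromaHeight are mine):

// Sketch: build the interleaved Cb/Cr plane that the bi-planar format expects.
// Assumes uStride == vStride == width / 2 (no padding in the source planes).
size_t chromaWidth  = width / 2;
size_t chromaHeight = height / 2;
size_t numberOfElementsForChroma = chromaWidth * chromaHeight * 2;

uint8_t *uvPlane = malloc(numberOfElementsForChroma);
for (size_t i = 0; i < chromaWidth * chromaHeight; i++) {
    uvPlane[2 * i]     = uPlane[i]; // Cb
    uvPlane[2 * i + 1] = vPlane[i]; // Cr
}
// ...copy uvPlane into plane 1 as above, then free(uvPlane).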


Please share the code for the U and V planes, and the chroma element count numberOfElementsForChroma. - JULIIncognito
@JULIIncognito I believe uvPlane is just a uint8_t array. This was a long time ago, sorry. numberOfElementsForChroma would be whatever fixed size you need. - cvu
@JULIIncognito Is numberOfElementsForChroma the same as width * height? Because it isn't working. - lilouch
@lilouch It didn't work for me either, so I just changed the color format to kCVPixelFormatType_32BGRA and did another conversion. - JULIIncognito
For Y420 or NV12, the number of chroma pixels is width*height/2, because the U plane and the V plane are each a quarter the size of the Y plane. - ooOlly
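To put concrete numbers on that: for 4:2:0 data each chroma plane is (width/2) x (height/2) bytes, so the combined CbCr plane holds width*height/2 bytes. For a 640x480 frame (my example, not from the thread):

size_t lumaSize   = 640 * 480;               // 307200 bytes for the Y plane
size_t uPlaneSize = (640 / 2) * (480 / 2);   //  76800 bytes
size_t vPlaneSize = (640 / 2) * (480 / 2);   //  76800 bytes
size_t numberOfElementsForChroma = uPlaneSize + vPlaneSize; // 153600 == 640 * 480 / 2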


Here is the full conversion code, in Objective-C.
And to the geniuses who say "it's so easy": don't look down on anyone. If you are here to help, then help; if you are here to show off how "smart" you are, please go somewhere else.
Here is a link with a detailed explanation of YUV handling: www.glebsoft.com

    /// Convert separate Y/U/V buffers into a pixel buffer in order to feed it to FaceUnity methods.
    /// The caller is responsible for releasing the returned buffer.
-(CVPixelBufferRef)pixelBufferFromYUV:(uint8_t *)yBuffer uBuffer:(uint8_t *)uBuffer vBuffer:(uint8_t *)vBuffer width:(int)width height:(int)height  {
    NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
    CVPixelBufferRef pixelBuffer = NULL;
    /// numberOfElementsForChroma is width*height/2 because the U plane and the V plane are each a quarter the size of the Y plane
    size_t uPlaneSize = width * height / 4;
    size_t vPlaneSize = width * height / 4;
    size_t numberOfElementsForChroma = uPlaneSize + vPlaneSize;

    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          width,
                                          height,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                          (__bridge CFDictionaryRef)(pixelAttributes),
                                          &pixelBuffer);
    if (result != kCVReturnSuccess) {
        return NULL;
    }

    /// For simplicity and speed, build a combined UV plane first. The bi-planar format
    /// expects this plane to be interleaved (CbCrCbCr...), not U followed by V.
    uint8_t *uvPlane = calloc(numberOfElementsForChroma, sizeof(uint8_t));
    for (size_t i = 0; i < uPlaneSize; i++) {
        uvPlane[i * 2]     = uBuffer[i]; // Cb
        uvPlane[i * 2 + 1] = vBuffer[i]; // Cr
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    memcpy(yDestPlane, yBuffer, width * height);

    /// Note: this assumes the destination planes have no row padding
    /// (bytes-per-row equal to the plane width).
    uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    memcpy(uvDestPlane, uvPlane, numberOfElementsForChroma);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    free(uvPlane);
    return pixelBuffer;
}
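A possible call site for the method above (my sketch; note that the caller owns the returned buffer and must release it):

CVPixelBufferRef buffer = [self pixelBufferFromYUV:yBuffer
                                           uBuffer:uBuffer
                                           vBuffer:vBuffer
                                             width:640
                                            height:480];
if (buffer != NULL) {
    CIImage *image = [CIImage imageWithCVPixelBuffer:buffer];
    // ...render or process the image...
    CVPixelBufferRelease(buffer);
}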


I had a similar problem, and here is what I came up with in Swift 2.0, using information from answers to other questions and the links provided.

func generatePixelBufferFromYUV2(inout yuvFrame: YUVFrame) -> CVPixelBufferRef?
{
    var uIndex: Int
    var vIndex: Int
    var uvDataIndex: Int
    var pixelBuffer: CVPixelBufferRef? = nil

    let err = CVPixelBufferCreate(kCFAllocatorDefault, yuvFrame.width, yuvFrame.height, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, nil, &pixelBuffer)
    if (err != 0) {
        NSLog("Error at CVPixelBufferCreate %d", err)
        return nil
    }

    if (pixelBuffer != nil)
    {
        CVPixelBufferLockBaseAddress(pixelBuffer!, 0)
        let yBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer!, 0)
        if (yBaseAddress != nil)
        {
            let yData = UnsafeMutablePointer<UInt8>(yBaseAddress)
            let yDataPtr = UnsafePointer<UInt8>(yuvFrame.luma.bytes)

            // Y-plane data
            memcpy(yData, yDataPtr, yuvFrame.luma.length)
        }

        let uvBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer!, 1)
        if (uvBaseAddress != nil)
        {
            let uvData = UnsafeMutablePointer<UInt8>(uvBaseAddress)
            let pUPointer = UnsafePointer<UInt8>(yuvFrame.chromaB.bytes)
            let pVPointer = UnsafePointer<UInt8>(yuvFrame.chromaR.bytes)

            // For the uv data, we need to interleave them as uvuvuvuv....
            let iuvRow = (yuvFrame.chromaB.length*2/yuvFrame.width)
            let iHalfWidth = yuvFrame.width/2

            for i in 0..<iuvRow
            {
                for j in 0..<(iHalfWidth)
                {
                    // UV data for the original frame. Just interleave them.
                    uvDataIndex = i*iHalfWidth+j
                    uIndex = (i*yuvFrame.width) + (j*2)
                    vIndex = uIndex + 1
                    uvData[uIndex] = pUPointer[uvDataIndex]
                    uvData[vIndex] = pVPointer[uvDataIndex]
                }
            }
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer!, 0)
    }

    return pixelBuffer
}

Note: yuvFrame is a struct that holds the y, u, and v plane buffers, as well as the width and height. Also, I set the CFDictionary? parameter to nil in CVPixelBufferCreate(...). If I pass it the IOSurface properties, it fails with a complaint that it is not IOSurface-backed, or with error -6683.
See the following links for more information. This one covers the UV interleaving: How to convert from YUV to CIImage for iOS, and a related question: CVOpenGLESTextureCacheCreateTextureFromImage returns error 6683.

I have a similar task to do. Could you give me a gist of the YUVFrame struct and the conversion code? From this I'm not able to get a result converting the Y-U-V pointers into a pixel buffer. Thanks. - Agent Smith
My code is a mix of Objective-C and Swift. The struct I have is written in Objective-C and contains three NSData and two NSIntegers: @property (strong, nonatomic) NSData *luma; @property (strong, nonatomic) NSData *chromaB; @property (strong, nonatomic) NSData *chromaR; @property NSInteger width; @property NSInteger height; // the @ before each property had been stripped by the comment formatting - mdang
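Pieced together from that comment, the container would look roughly like this (the Objective-C interface form and the class name are my reconstruction; only the property list comes from the comment):

// Reconstructed from the comment above; everything except the properties is assumed.
@interface YUVFrame : NSObject
@property (strong, nonatomic) NSData *luma;     // Y plane bytes
@property (strong, nonatomic) NSData *chromaB;  // Cb (U) plane bytes
@property (strong, nonatomic) NSData *chromaR;  // Cr (V) plane bytes
@property NSInteger width;
@property NSInteger height;
@end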
