How can I convert a UIImage to a CVPixelBufferRef in the YUV color space?

I'm implementing video recording. I need to snapshot a view into a UIImage and then convert it to a CVPixelBufferRef. This works fine in the RGBA color space, but the CVPixelBufferRef I need has to be in a YUV color space.
Does anyone have any ideas? Thanks.
+ (CVPixelBufferRef) pixelBufferFromLayer:(CALayer *)layer forSize:(CGSize)size
{
    UIImage * image = [self fetchScreenShotFromLayer:layer forSize:size];

// this worked fine
//    CVPixelBufferRef rgbBuffer = [self RGBPixelBufferFromCGImage:image.CGImage];
//    return rgbBuffer;

//    NSData * imageData = UIImageJPEGRepresentation(image, 0.5);
    NSData * imageData = UIImagePNGRepresentation(image);
    CVPixelBufferRef buffer = [self yuvPixelBufferWithData:imageData width:size.width height:size.height];
    return buffer;
}

Creating a CVPixelBufferRef in the RGB color space works fine.
// RGB color space pixel buffer
+ (CVPixelBufferRef) RGBPixelBufferFromCGImage:(CGImageRef)image
{
    NSDictionary * options = @{
                               (NSString *)kCVPixelBufferCGImageCompatibilityKey: @(YES),
                               (NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey: @(YES),
                               };

    CVPixelBufferRef pxbuffer = NULL;

    CGFloat frameWidth = CGImageGetWidth(image);
    CGFloat frameHeight = CGImageGetHeight(image);

    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options, &pxbuffer);

    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void * pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, frameWidth, frameHeight, 8, CVPixelBufferGetBytesPerRow(pxbuffer), rgbColorSpace, (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);

    NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformIdentity);
    CGContextDrawImage(context, CGRectMake(0, 0, frameWidth, frameHeight), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

// snapshot of a layer
+ (UIImage *) fetchScreenShotFromLayer:(CALayer *)layer forSize:(CGSize)size
{
    UIImage * image = nil;

    @autoreleasepool {
        NSLock * aLock = [NSLock new];
        [aLock lock];

        CGSize imageSize = size;
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
        CGContextRef context = UIGraphicsGetCurrentContext();
        [layer renderInContext:context];
        image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        [aLock unlock];
    }

    return image;
}

This is the part that has the problem.

// data to yuv buffer
+ (CVPixelBufferRef)yuvPixelBufferWithData:(NSData *)dataFrame
                                     width:(size_t)w
                                    height:(size_t)h
{
    unsigned char* buffer = (unsigned char*) dataFrame.bytes;
    CVPixelBufferRef getCroppedPixelBuffer = [self copyDataFromBuffer:buffer toYUVPixelBufferWithWidth:w height:h];
    return getCroppedPixelBuffer;
}

+ (CVPixelBufferRef) copyDataFromBuffer:(const unsigned char*)buffer toYUVPixelBufferWithWidth:(size_t)w height:(size_t)h
{
    NSDictionary * options = @{
                               (NSString *)kCVPixelBufferCGImageCompatibilityKey: @(YES),
                               (NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey: @(YES),
                               };

    CVPixelBufferRef pixelBuffer;
    CVPixelBufferCreate(NULL, w, h, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, (__bridge CFDictionaryRef)(options), &pixelBuffer);

    size_t count = CVPixelBufferGetPlaneCount(pixelBuffer);
    NSLog(@"PlaneCount = %zu", count);  // 2

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    // Copy the Y plane row by row (assumes the source rows are tightly packed).
    size_t d = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    const unsigned char* src = buffer;
    unsigned char* dst = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

    for (unsigned int rIdx = 0; rIdx < h; ++rIdx, dst += d, src += w) {
        memcpy(dst, src, w);
    }

    // Copy the interleaved CbCr plane: w bytes per row, h/2 rows.
    d = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    dst = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    h = h >> 1;
    for (unsigned int rIdx = 0; rIdx < h; ++rIdx, dst += d, src += w) {
        memcpy(dst, src, w);
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    return pixelBuffer;
}

Here are the images:

[Original image]

[RGB buffer]

[YUV buffer]

Thanks for your help.

1 Answer


You can get the raw RGBA data out of the UIImage, convert those pixels to YUV, and then fill the individual planes of the CVPixelBufferRef with the YUV data. (The CVPixelBufferRef here comes from the CMSampleBufferRef that is passed to captureOutput.) If that is what you want, just set the pixel format to kCVPixelFormatType_420YpCbCr8BiPlanarFullRange when you initialize the AVCaptureVideoDataOutput of your AVCaptureSession, and you're done.
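
As a rough illustration of that approach (this sketch is mine, not the answerer's): render the UIImage into a plain RGBA bitmap to get at the raw pixels, then convert them to YCbCr and write the results into the two planes of an NV12 buffer. The method name nv12PixelBufferFromImage: is made up for this example, and the full-range BT.601 coefficients are one common choice among several:

// Illustrative sketch (not from the original post): UIImage -> NV12 CVPixelBuffer.
+ (CVPixelBufferRef) nv12PixelBufferFromImage:(UIImage *)image
{
    CGImageRef cgImage = image.CGImage;
    size_t w = CGImageGetWidth(cgImage);
    size_t h = CGImageGetHeight(cgImage);

    // 1. Draw the image into a plain RGBA bitmap we own, so we can read raw pixels.
    size_t bytesPerRow = w * 4;
    unsigned char * rgba = calloc(h * bytesPerRow, 1);
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(rgba, w, h, 8, bytesPerRow, cs, (CGBitmapInfo)kCGImageAlphaNoneSkipLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), cgImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(cs);

    // 2. Create the destination NV12 buffer (plane 0: Y, plane 1: interleaved CbCr).
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, w, h, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, NULL, &pixelBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    unsigned char * yPlane  = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    unsigned char * uvPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    size_t yStride  = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    size_t uvStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);

    // 3. RGB -> YCbCr, full-range BT.601. Y is full resolution; CbCr is
    //    subsampled 2x2, so take one chroma sample per 2x2 pixel block.
    for (size_t y = 0; y < h; y++) {
        for (size_t x = 0; x < w; x++) {
            const unsigned char * p = rgba + y * bytesPerRow + x * 4;
            float r = p[0], g = p[1], b = p[2];
            yPlane[y * yStride + x] = (unsigned char)(0.299f * r + 0.587f * g + 0.114f * b);
            if ((y & 1) == 0 && (x & 1) == 0) {
                // Byte offset of the CbCr pair is (x / 2) * 2 == x, since x is even.
                unsigned char * uv = uvPlane + (y / 2) * uvStride + x;
                uv[0] = (unsigned char)(-0.169f * r - 0.331f * g + 0.500f * b + 128.0f); // Cb
                uv[1] = (unsigned char)( 0.500f * r - 0.419f * g - 0.081f * b + 128.0f); // Cr
            }
        }
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    free(rgba);
    return pixelBuffer;
}

And the last sentence of the answer, about having the camera deliver NV12 frames directly, amounts to roughly this configuration:

// Hedged sketch: ask the capture output for full-range bi-planar 4:2:0 frames.
AVCaptureVideoDataOutput * output = [[AVCaptureVideoDataOutput alloc] init];
output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey:
                              @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };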


Could you walk us through how to access the raw data of a UIImage and convert that data to YUV? - Jovan
Getting the raw data of a UIImage: https://dev59.com/qHRB5IYBdhLWcg3w-8A9 There are also plenty of answers about converting between YUV and RGB; here is one approach: https://dev59.com/EGMm5IYBdhLWcg3wFL04 - Xiaoqi
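
For the reverse direction mentioned in that comment, the per-pixel math is just the inverse matrix. A minimal sketch of full-range BT.601 YCbCr-to-RGB (the coefficients match the forward conversion above and are a standard choice, not taken from the linked answers; clamping is needed because the inverse can leave the 0..255 range):

// Hedged sketch: one full-range BT.601 YCbCr sample back to RGB.
static inline unsigned char clamp255(float v)
{
    return (unsigned char)(v < 0.0f ? 0.0f : (v > 255.0f ? 255.0f : v));
}

static void YCbCrToRGB(unsigned char y, unsigned char cb, unsigned char cr,
                       unsigned char * r, unsigned char * g, unsigned char * b)
{
    float Y = y, Cb = cb - 128.0f, Cr = cr - 128.0f;
    *r = clamp255(Y + 1.402f * Cr);
    *g = clamp255(Y - 0.344f * Cb - 0.714f * Cr);
    *b = clamp255(Y + 1.772f * Cb);
}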
