A CGBitmapContextRef can only paint into something like 32ARGB, which means you will need to create ARGB (or RGBA) buffers and then find a way to transfer YUV pixels onto that ARGB surface very quickly. The process involves using CoreImage, a home-made CVPixelBufferRef obtained through a pool, a CGBitmapContextRef referencing your home-made pixel buffer, and then recreating a CMSampleBufferRef resembling your input buffer but referencing your output pixels. In other words:
- Grab the incoming pixels into a CIImage.
- Create a CVPixelBufferPool matching the pixel format and output dimensions you are producing. Don't create CVPixelBuffers without a pool in real-time situations: you will run out of memory if your producer is too fast, you will fragment your RAM because you aren't reusing buffers, and it's a waste of cycles.
- Create a CIContext with the default constructor and share it between frames. It holds no external state, but the documentation says recreating it on every frame is very expensive.
- On each incoming frame, create a new pixel buffer. Make sure to use an allocation threshold so you don't get runaway RAM usage.
- Lock the pixel buffer.
- Create a bitmap context referencing the bytes in the pixel buffer.
- Use the CIContext to render the planar image data into the linear buffer.
- Perform your app-specific drawing in the CGContext!
- Unlock the pixel buffer.
- Fetch the timing info of the original sample buffer.
- Create a CMVideoFormatDescriptionRef by asking the pixel buffer for its exact format.
- Create a sample buffer for the pixel buffer. Done!
Here is a sample implementation, where I have chosen 32ARGB as the image format to work with, as this is something that both CGBitmapContext and CoreVideo handle well on iOS:
{
	CVPixelBufferPoolRef _pool;
	CGSize _poolBufferDimensions;
	CIContext *_imageContext; // shared between frames; create once, e.g. [CIContext contextWithOptions:nil]
	id _consumer;             // app-specific downstream consumer of the output sample buffers
}
- (void)_processSampleBuffer:(CMSampleBufferRef)inputBuffer
{
	// Grab the incoming pixels into a CIImage.
	CVPixelBufferRef inputPixels = CMSampleBufferGetImageBuffer(inputBuffer);
	CIImage *inputImage = [CIImage imageWithCVPixelBuffer:inputPixels];

	// (Re)create the pixel buffer pool whenever the dimensions change.
	CGSize bufferDimensions = {CVPixelBufferGetWidth(inputPixels), CVPixelBufferGetHeight(inputPixels)};
	if(!_pool || !CGSizeEqualToSize(bufferDimensions, _poolBufferDimensions)) {
		if(_pool) {
			CFRelease(_pool);
		}
		CVReturn ok0 = CVPixelBufferPoolCreate(NULL,
			NULL,
			(__bridge CFDictionaryRef)(@{
				(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32ARGB),
				(id)kCVPixelBufferWidthKey: @(bufferDimensions.width),
				(id)kCVPixelBufferHeightKey: @(bufferDimensions.height),
			}),
			&_pool
		);
		_poolBufferDimensions = bufferDimensions;
		assert(ok0 == kCVReturnSuccess);
	}

	// Create the output pixel buffer, with an allocation threshold so a
	// too-fast producer can't cause runaway RAM usage.
	CVPixelBufferRef outputPixels;
	CVReturn ok1 = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes(NULL,
		_pool,
		(__bridge CFDictionaryRef)@{
			(__bridge id)kCVPixelBufferPoolAllocationThresholdKey: @20
		},
		&outputPixels
	);
	if(ok1 == kCVReturnWouldExceedAllocationThreshold) {
		// Dropping this frame is better than exhausting memory.
		return;
	}
	assert(ok1 == kCVReturnSuccess);

	// Lock the pixel buffer and wrap its bytes in a bitmap context.
	CGColorSpaceRef deviceColors = CGColorSpaceCreateDeviceRGB();
	CVReturn ok2 = CVPixelBufferLockBaseAddress(outputPixels, 0);
	assert(ok2 == kCVReturnSuccess);
	CGContextRef cg = CGBitmapContextCreate(
		CVPixelBufferGetBaseAddress(outputPixels),
		CVPixelBufferGetWidth(outputPixels), CVPixelBufferGetHeight(outputPixels),
		8,
		CVPixelBufferGetBytesPerRow(outputPixels),
		deviceColors,
		kCGImageAlphaPremultipliedFirst
	);
	CGColorSpaceRelease(deviceColors);
	assert(cg != NULL);

	// Render the planar input image into the linear ARGB buffer,
	// then do the app-specific drawing on top (CoreText, here).
	[_imageContext render:inputImage toCVPixelBuffer:outputPixels];
	CGContextSetRGBFillColor(cg, 0.5, 0, 0, 1);
	CGContextSetTextDrawingMode(cg, kCGTextFill);
	NSAttributedString *text = [[NSAttributedString alloc] initWithString:@"Hello world" attributes:NULL];
	CTLineRef line = CTLineCreateWithAttributedString((__bridge CFAttributedStringRef)text);
	CTLineDraw(line, cg);
	CFRelease(line);
	CGContextRelease(cg);
	CVPixelBufferUnlockBaseAddress(outputPixels, 0);

	// Carry over the original buffer's timing, describe the output pixel
	// buffer's exact format, and wrap everything back into a sample buffer.
	CMSampleTimingInfo timingInfo;
	OSStatus ok3 = CMSampleBufferGetSampleTimingInfo(inputBuffer, 0, &timingInfo);
	assert(ok3 == noErr);
	CMVideoFormatDescriptionRef videoFormat;
	OSStatus ok4 = CMVideoFormatDescriptionCreateForImageBuffer(NULL, outputPixels, &videoFormat);
	assert(ok4 == noErr);
	CMSampleBufferRef outputBuffer;
	OSStatus ok5 = CMSampleBufferCreateForImageBuffer(NULL,
		outputPixels,
		YES,
		NULL,
		NULL,
		videoFormat,
		&timingInfo,
		&outputBuffer
	);
	assert(ok5 == noErr);

	[_consumer consumeSampleBuffer:outputBuffer];
	CFRelease(outputPixels);
	CFRelease(videoFormat);
	CFRelease(outputBuffer);
}
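
For context, here is a minimal sketch of how this method might be wired up. The class name, the capture-delegate setup, and the _consumer object are assumptions for illustration; the key point is that the shared CIContext is created exactly once, and the pool is released when the owner goes away:

@implementation SampleProcessor // hypothetical class name; adopts AVCaptureVideoDataOutputSampleBufferDelegate

- (instancetype)init
{
	if((self = [super init])) {
		// Create the CIContext once and share it across frames;
		// per the docs, recreating it per frame is very expensive.
		_imageContext = [CIContext contextWithOptions:nil];
	}
	return self;
}

// Called by an AVCaptureVideoDataOutput on its sample buffer queue.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
	[self _processSampleBuffer:sampleBuffer];
}

- (void)dealloc
{
	if(_pool) {
		CFRelease(_pool);
	}
}

@end

Note that _processSampleBuffer: runs synchronously on whatever queue delivers the frames, so any state it touches should stay confined to that queue.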