Is there a faster way to access the frame buffer than using glReadPixels? I need read-only access to a small rectangular rendering area of the frame buffer so that I can process the data further on the CPU. Performance matters because this has to be done repeatedly. I have searched the web for approaches such as Pixel Buffer Objects and glMapBuffer, but it seems OpenGL ES 2.0 does not support them.
As of iOS 5.0, you can render into an FBO whose color attachment is a texture created from a CVPixelBufferRef through the texture caches, so the rendered BGRA pixels land directly in that pixel buffer and there is no need to pull them down with glReadPixels(). This is much faster than glReadPixels(). I found that on my iPhone 4, glReadPixels() was the bottleneck when reading 720p video frames for encoding to disk; it kept the encoding from running at more than 8-9 FPS. Replacing it with the fast texture cache reads now lets me encode 720p video at 20 FPS, and the bottleneck has moved from the pixel reading to the OpenGL ES processing and the actual movie encoding parts of the pipeline. On an iPhone 4S, this lets you write 1080p video at a full 30 FPS. I first configure my AVAssetWriter, add an input, and configure a pixel buffer input. The following code is used to set up the pixel buffer input:
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
    [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
    [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
    nil];

assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
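For completeness, here is a minimal sketch of the asset writer and video input that the adaptor above attaches to. The movieURL variable and the exact output settings are assumptions, not part of the original answer:

NSError *error = nil;
AVAssetWriter *assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL
                                                       fileType:AVFileTypeQuickTimeMovie
                                                          error:&error];

NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                 AVVideoWidthKey  : @(videoSize.width),
                                 AVVideoHeightKey : @(videoSize.height) };
AVAssetWriterInput *assetWriterVideoInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:videoSettings];
// Frames come from a live render loop, so don't let the writer stall waiting for data
assetWriterVideoInput.expectsMediaDataInRealTime = YES;
[assetWriter addInput:assetWriterVideoInput];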
一旦我有了这个,我会使用以下代码配置将呈现我的视频帧的FBO:
if ([GPUImageOpenGLESContext supportsFastTextureUpload])
{
    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)[[GPUImageOpenGLESContext sharedImageProcessingOpenGLESContext] context], NULL, &coreVideoTextureCache);
    if (err)
    {
        NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
    }

    // Pull a pixel buffer from the pool associated with the asset writer input
    CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);

    // Wrap that pixel buffer in an OpenGL ES texture via the texture cache
    CVOpenGLESTextureRef renderTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault, coreVideoTextureCache, renderTarget,
                                                  NULL, // texture attributes
                                                  GL_TEXTURE_2D,
                                                  GL_RGBA, // opengl format
                                                  (int)videoSize.width,
                                                  (int)videoSize.height,
                                                  GL_BGRA, // native iOS format
                                                  GL_UNSIGNED_BYTE,
                                                  0,
                                                  &renderTexture);

    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // Attach the texture (and therefore the pixel buffer) as the FBO's color target
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
}
This pulls a pixel buffer from the pool associated with my asset writer input, creates and associates a texture with it, and uses that texture as the render target for my FBO.
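The snippet above assumes that a framebuffer object has already been generated and bound before the texture is attached. A minimal sketch of that setup, with movieFramebuffer as a hypothetical name for the FBO handle:

GLuint movieFramebuffer;
glGenFramebuffers(1, &movieFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, movieFramebuffer);

// ... create renderTexture from the pixel buffer and call glFramebufferTexture2D() as shown above ...

// Confirm the texture-backed FBO is complete before rendering into it
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
NSAssert(status == GL_FRAMEBUFFER_COMPLETE, @"Incomplete framebuffer: 0x%x", status);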
Once I have rendered a frame, I lock the base address of the pixel buffer:
CVPixelBufferLockBaseAddress(pixel_buffer, 0);
and then feed it straight into my asset writer to be encoded:
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime], 120);

if (![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime])
{
    NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value);
}
else
{
//    NSLog(@"Recorded pixel buffer at time: %lld", currentTime.value);
}

CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);

if (![GPUImageOpenGLESContext supportsFastTextureUpload])
{
    CVPixelBufferRelease(pixel_buffer);
}
Note that at no point here am I reading anything manually. Also, the textures natively come in BGRA format, which is what AVAssetWriter is optimized to use when encoding video, so there is no need to do any color swizzling here; the raw BGRA pixels are simply fed into the encoder to produce the movie. The same texture cache technique also works for plain raw pixel extraction without an asset writer; compared with glReadPixels(), it still gives a significant speedup in practice, although a smaller one than I see with the pixel buffer pool I use with AVAssetWriter. ...
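For the original question's use case (read-only CPU access to a small rectangle of the rendered frame), the same pixel buffer can be read directly after rendering completes, with no glReadPixels() call. A minimal sketch under that assumption; rectX, rectY, rectWidth, rectHeight and processPixel() are hypothetical names for the region of interest and the CPU-side processing:

glFinish(); // make sure the GPU has finished rendering into the texture-backed FBO

CVPixelBufferLockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);

uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(renderTarget);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(renderTarget);

// Walk only the small rectangle of interest; each pixel is 4 bytes of BGRA
for (size_t row = rectY; row < rectY + rectHeight; row++)
{
    uint8_t *pixel = baseAddress + row * bytesPerRow + rectX * 4;
    for (size_t col = 0; col < rectWidth; col++, pixel += 4)
    {
        processPixel(pixel); // pixel[0] = B, pixel[1] = G, pixel[2] = R, pixel[3] = A
    }
}

CVPixelBufferUnlockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);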
A Swift version of the pixel buffer and texture setup highlights two details marked IMPORTANT below: the CVPixelBuffer must be created with IOSurface backing (kCVPixelBufferIOSurfacePropertiesKey), and the resulting CVOpenGLESTexture must be kept retained.

let k = bounds.height / bounds.width
let vw = 720.0 // 1080.0
let vh = vw * k
// Round the buffer dimensions down to a multiple of 16
let pixelBufferWidth  = Int(vw) - Int(vw) % 16
let pixelBufferHeight = Int(vh) - Int(vh) % 16
...
var pixelBuffer: CVPixelBuffer? = nil
let ioSurfaceProperties = [:] as [String : Any] as CFDictionary
let pixelBufferAttributes = [
    String(kCVPixelBufferIOSurfacePropertiesKey) : ioSurfaceProperties // <-- IMPORTANT
] as [String : Any] as CFDictionary

let cvReturn1 = CVPixelBufferCreate(kCFAllocatorDefault,
                                    pixelBufferWidth,
                                    pixelBufferHeight,
                                    kCVPixelFormatType_32BGRA,
                                    pixelBufferAttributes,
                                    &pixelBuffer)
guard cvReturn1 == kCVReturnSuccess,
      let sourceImage = pixelBuffer
else {
    fatalError("createCVTextureCache fail CVPixelBufferCreate(nil, \(pixelBufferWidth), \(pixelBufferHeight), \(kCVPixelFormatType_32BGRA), \(pixelBufferAttributes)) return non zero (\(cvReturnToString(cvReturn1)))")
}
...
....
var cvGlTexture: CVOpenGLESTexture? = nil
let cvReturn2 = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                             textureCache, // created earlier with CVOpenGLESTextureCacheCreate (elided)
                                                             sourceImage,
                                                             nil, // textureAttributes: CFDictionary?
                                                             GLenum(GL_TEXTURE_2D),
                                                             GL_RGBA, // internal format
                                                             GLsizei(pixelBufferWidth),  // width: must match the pixel buffer
                                                             GLsizei(pixelBufferHeight), // height: must match the pixel buffer
                                                             GLenum(GL_BGRA), // format: GLenum
                                                             GLenum(GL_UNSIGNED_BYTE), // type: GLenum
                                                             0, // planeIndex: Int
                                                             &cvGlTexture) // textureOut: UnsafeMutablePointer<CVOpenGLESTexture?>
guard cvReturn2 == kCVReturnSuccess,
      let cvTexture = cvGlTexture
else {
    fatalError("createCVTextureCache fail CVOpenGLESTextureCacheCreateTextureFromImage(...) return non zero (\(cvReturnToString(cvReturn2)))")
}

GLView.cvTextureTarget = CVOpenGLESTextureGetTarget(cvTexture)
GLView.cvTextureName = CVOpenGLESTextureGetName(cvTexture)
GLView.cvTexture = cvTexture // <--- IMPORTANT (RETAIN)
....
Before calling CVPixelBufferLockBaseAddress, do you need to call glFinish or glFlush to make sure the GPU is not still processing commands into the framebuffer? - Mark Ingram

You must call glFinish() before -appendPixelBuffer:withPresentationTime: (on iOS, glFlush() does not block in many cases); otherwise you will see screen tearing when recording video. While it does not appear in the code above, it is called as part of the rendering routine right before this point in the framework where I use the above. - Brad Larson
Is it feasible to create a 3-channel RGB/BGR texture without an alpha channel with CVOpenGLESTextureCacheCreateTextureFromImage? I can't seem to find the right flag (https://dev59.com/RYXca4cB1Zd3GeqPJICl). - Adi Shavit
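To make the glFinish() point from these comments concrete, here is a minimal sketch of a per-frame encode path; renderFrame and movieFramebuffer are hypothetical names standing in for the answer's rendering routine and FBO:

glBindFramebuffer(GL_FRAMEBUFFER, movieFramebuffer);
[self renderFrame]; // issue the OpenGL ES drawing commands for this frame

glFinish(); // block until the GPU has finished writing into the texture-backed pixel buffer

CVPixelBufferLockBaseAddress(renderTarget, 0);
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime], 120);
if (![assetWriterPixelBufferInput appendPixelBuffer:renderTarget withPresentationTime:currentTime])
{
    NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value);
}
CVPixelBufferUnlockBaseAddress(renderTarget, 0);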