Using a video as an OpenGL ES texture


I have a local video that I'd like to pass to an OpenGL shader as a texture.

I know there are many posts on related topics; some are old or odd, and I couldn't get some of them to work.

It sounds like the approach should be:

  • Load the video
  • Get the video output as a CVPixelBuffer
  • From there, approaches vary around yuv vs rgb, CVOpenGLESTextureCacheCreateTextureFromImage vs glTexImage2D, and so on. Unless there is a specific reason to use yuv, I'd rather stick with rgb.

My code can render UIImages, but I haven't been able to adapt it to video.

It seems that CVOpenGLESTextureCacheCreateTextureFromImage is now recommended over glTexImage2D for passing video frames into an OpenGL program. Some people convert the video output buffer into an image and pipe that through, but that sounds inefficient.

First, here is how I get the video pixel buffer and pass it to the view that manages the GL program (you can probably skip this part, since I believe it works fine):

import UIKit
import AVFoundation

class ViewController: UIViewController {
    // video things
    var videoOutput: AVPlayerItemVideoOutput!
    var player: AVPlayer!
    var playerItem: AVPlayerItem!
    var isVideoReady = false

    override func viewDidLoad() {
        super.viewDidLoad()
        self.setupVideo()
    }

    func setupVideo() -> Void {
        let url = Bundle.main.url(forResource: "myVideoName", withExtension: "mp4")!

        let outputSettings: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
        self.videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: outputSettings)
        self.player = AVPlayer()
        let asset = AVURLAsset(url: url)


        asset.loadValuesAsynchronously(forKeys: ["playable"]) {
            var error: NSError? = nil
            let status = asset.statusOfValue(forKey: "playable", error: &error)
            switch status {
            case .loaded:
                self.playerItem = AVPlayerItem(asset: asset)
                self.playerItem.add(self.videoOutput)
                self.player.replaceCurrentItem(with: self.playerItem)
                self.isVideoReady = true
            case .failed:
                print("failed")
            case .cancelled:
                print("cancelled")
            default:
                print("default")
            }
        }
    }

    // this function is called just before the openGL program renders
    // and can be used to update the texture (the GL program is fully initialized at this point)
    func onGlRefresh(glView: OpenGLView) -> Void {
        if self.isVideoReady {
            let pixelBuffer = self.videoOutput.copyPixelBuffer(forItemTime: self.playerItem.currentTime(), itemTimeForDisplay: nil)
            glView.pixelBuffer = pixelBuffer
        }
    }
}

This seems to work fine, although I can't really test it :)
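One detail not shown above (so this part is an untested guess on my side): playback still has to be started, and I assume hasNewPixelBuffer(forItemTime:) can be used in onGlRefresh to skip the copy when no new frame is available:

// Untested sketch: after self.player.replaceCurrentItem(with: self.playerItem)
// in the .loaded case, playback also needs to be started:
//     self.player.play()

// And onGlRefresh could then only copy a buffer when the output has a new frame:
func onGlRefresh(glView: OpenGLView) -> Void {
    guard self.isVideoReady else { return }

    let itemTime = self.playerItem.currentTime()
    if self.videoOutput.hasNewPixelBuffer(forItemTime: itemTime) {
        glView.pixelBuffer = self.videoOutput.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil)
    }
}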

So now I have a CVPixelBuffer (once the video has loaded). How can I pass it to the GL program?

This code works for a CGImage? texture source:

    // textureSource is a CGImage?
    guard let textureSource = textureSource else { return }
    let width: Int = textureSource.width
    let height: Int = textureSource.height

    let spriteData = calloc(width * height * 4, MemoryLayout<GLubyte>.stride)

    let colorSpace = textureSource.colorSpace!

    let spriteContext: CGContext = CGContext(data: spriteData, width: width, height: height, bitsPerComponent: 8, bytesPerRow: width*4, space: colorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    spriteContext.draw(textureSource, in: CGRect(x: 0, y: 0, width: CGFloat(width), height: CGFloat(height)))

    glBindTexture(GLenum(GL_TEXTURE_2D), _textureId!)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(width), GLsizei(height), 0, GLenum(GL_RGBA), UInt32(GL_UNSIGNED_BYTE), spriteData)

    free(spriteData)

But I can't figure out how to efficiently adapt this to a CVPixelBuffer.
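The most direct adaptation I can think of (untested, and probably not the efficient route since it copies every frame) would be to lock the buffer and hand its BGRA bytes straight to glTexImage2D:

    // Rough, untested sketch: upload a kCVPixelFormatType_32BGRA buffer with glTexImage2D.
    // It assumes the rows are tightly packed (bytesPerRow == width * 4); GL_BGRA comes from
    // the GL_APPLE_texture_format_BGRA8888 extension (the constant may be spelled GL_BGRA_EXT).
    func upload(pixelBuffer: CVPixelBuffer) {
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        guard let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }

        glBindTexture(GLenum(GL_TEXTURE_2D), _textureId!)
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(width), GLsizei(height), 0,
                     GLenum(GL_BGRA), GLenum(GL_UNSIGNED_BYTE), baseAddress)
    }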

I'm happy to share more code if needed, but I think this post is long enough already :)

========== EDIT ==========

I've looked at a bunch of examples (all derived from Apple's CameraRipple and Ray Wenderlich's tutorials), and here is my github repo so far (I'll keep it alive to preserve the link). It's not ideal, but I didn't want to paste too much code here. I've been able to get some video texturing working, but with the following issues:

  • The colors are incorrect
  • The display in the simulator differs from the display on the device. In the simulator, only the left half of the video is shown (stretched to cover the whole screen), and there is some vertical distortion.

The simulator issue seems related to Xcode 8 being in beta, but I'm not sure...


Would you mind sharing the complete code? - zeus
Hi, how did your problem turn out? I'm in the same situation now; the problem is I don't know where to start. - Lysdexia
2 Answers


Not long ago I ran into the same problem. A good place to start is the sample provided by Apple (CameraRipple).

What you need to do is:

  1. Get a CVPixelBufferRef somehow (according to your post, this is done). It should be received repeatedly so the OpenGL program can display live video.
  2. Use a shader that can handle video (by that I mean a shader that converts yuv into regular colors).

For example:

    varying lowp vec2 v_texCoord;
    precision mediump float;

    uniform sampler2D SamplerUV;
    uniform sampler2D SamplerY;
    uniform mat3 colorConversionMatrix;

    void main()
    {
        mediump vec3 yuv;
        lowp vec3 rgb;

        // Subtract constants to map the video range start at 0
        yuv.x = (texture2D(SamplerY, v_texCoord).r - (16.0/255.0));
        yuv.yz = (texture2D(SamplerUV, v_texCoord).ra - vec2(0.5, 0.5));

        rgb =   yuv*colorConversionMatrix;

        gl_FragColor = vec4(rgb,1);

    }
  3. For displaying video, Apple recommends the following color conversion matrix (I use it as well):

    static const GLfloat kColorConversion709[] = {
        1.1643,  0.0000,  1.2802,
        1.1643, -0.2148, -0.3806,
        1.1643,  2.1280,  0.0000
    };
    
  4. And of course, to display the buffer as a texture in OpenGL, you can use something like:

    -(void)displayPixelBuffer:(CVPixelBufferRef)pixelBuffer
    {
        CVReturn err;
        if (pixelBuffer != NULL) {
            int frameWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
            int frameHeight = (int)CVPixelBufferGetHeight(pixelBuffer);

            if (!_videoTextureCache) {
                NSLog(@"No video texture cache");
                return;
            }
            [self cleanUpTextures];

            // Create Y and UV textures from the pixel buffer. These textures will be drawn on the frame buffer.

            // Y-plane.
            glActiveTexture(GL_TEXTURE0);
            err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _videoTextureCache, pixelBuffer, NULL, GL_TEXTURE_2D, GL_LUMINANCE, frameWidth, frameHeight, GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &_lumaTexture);
            if (err) {
                NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
            }

            glBindTexture(CVOpenGLESTextureGetTarget(_lumaTexture), CVOpenGLESTextureGetName(_lumaTexture));
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

            // UV-plane.
            glActiveTexture(GL_TEXTURE1);
            err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _videoTextureCache, pixelBuffer, NULL, GL_TEXTURE_2D, GL_LUMINANCE_ALPHA, frameWidth / 2, frameHeight / 2, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1, &_chromaTexture);
            if (err) {
                NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
            }

            glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

            glEnableVertexAttribArray(_vertexBufferID);
            glBindFramebuffer(GL_FRAMEBUFFER, _vertexBufferID);

            CFRelease(pixelBuffer);

            glUniformMatrix3fv(uniforms[UNIFORM_COLOR_CONVERSION_MATRIX], 1, GL_FALSE, _preferredConversion);
        }
    }
    
  5. Do not forget to clean up the textures:

    -(void)cleanUpTextures
    {
        if (_lumaTexture) {
            CFRelease(_lumaTexture);
            _lumaTexture = NULL;
        }
        if (_chromaTexture) {
            CFRelease(_chromaTexture);
            _chromaTexture = NULL;
        }
        // Periodic texture cache flush every frame
        CVOpenGLESTextureCacheFlush(_videoTextureCache, 0);
    }
    

PS. This isn't a question about Swift, but I imagine converting the Obj-C to Swift could be a problem.
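Since the displayPixelBuffer: code above assumes that _videoTextureCache already exists, here is a minimal sketch of creating it in Swift (assuming `context` is the EAGLContext used for rendering):

    // Create the texture cache once, after the EAGL context is set up.
    var videoTextureCache: CVOpenGLESTextureCache?
    let err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, context, nil, &videoTextureCache)
    if err != kCVReturnSuccess {
        print("Error at CVOpenGLESTextureCacheCreate \(err)")
    }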


Thanks! I have something similar now, but I'm wondering: 1) why use two textures with YUV colors rather than a single RGB texture, as for a picture (in WebGL I can attach a single video texture that outputs RGB); 2) what do you mean by "a shader that can handle video"? From the shader's point of view a texture is a texture, right? - Guig
I've edited my question to reflect my current progress. I have code similar to yours, but my colors are wrong and it looks like I can't capture the V channel... If you want to take a look, here is where I attach the textures, and here is the shader. - Guig

Regarding the colors: what was missing was assigning the texture unit to each uniform by calling glUniform1i() at the end of each part of refreshTextures():
func refreshTextures() -> Void {
    guard let pixelBuffer = pixelBuffer else { return }
    let textureWidth: GLsizei = GLsizei(CVPixelBufferGetWidth(pixelBuffer))
    let textureHeight: GLsizei = GLsizei(CVPixelBufferGetHeight(pixelBuffer))

    guard let videoTextureCache = videoTextureCache else { return }

    self.cleanUpTextures()

    // Y plane
    glActiveTexture(GLenum(GL_TEXTURE0))

    var err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache, pixelBuffer, nil, GLenum(GL_TEXTURE_2D), GL_RED_EXT, textureWidth, textureHeight, GLenum(GL_RED_EXT), GLenum(GL_UNSIGNED_BYTE), 0, &lumaTexture)

    if err != kCVReturnSuccess {
        print("Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err)
        return
    }
    guard let lumaTexture = lumaTexture else { return }

    glBindTexture(CVOpenGLESTextureGetTarget(lumaTexture), CVOpenGLESTextureGetName(lumaTexture))
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
    glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GLfloat(GL_CLAMP_TO_EDGE))
    glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GLfloat(GL_CLAMP_TO_EDGE))

    glUniform1i(_locations.uniforms.textureSamplerY, 0)


    // UV plane
    glActiveTexture(GLenum(GL_TEXTURE1))

    err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache, pixelBuffer, nil, GLenum(GL_TEXTURE_2D), GL_RG_EXT, textureWidth/2, textureHeight/2, GLenum(GL_RG_EXT), GLenum(GL_UNSIGNED_BYTE), 1, &chromaTexture)

    if err != kCVReturnSuccess {
        print("Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err)
        return
    }
    guard let chromaTexture = chromaTexture else { return }

    glBindTexture(CVOpenGLESTextureGetTarget(chromaTexture), CVOpenGLESTextureGetName(chromaTexture))
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
    glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GLfloat(GL_CLAMP_TO_EDGE))
    glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GLfloat(GL_CLAMP_TO_EDGE))

    glUniform1i(_locations.uniforms.textureSamplerUV, 1)
}

Here, the type of the uniforms is also corrected to:

private struct Uniforms {
    var textureSamplerY = GLint()
    var textureSamplerUV = GLint()
}

And now it looks like we get the correct colors.
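For reference, those uniform locations come from glGetUniformLocation after the program is linked; a quick sketch (assuming `program` is the linked GL program and the sampler names match the shader from the other answer):

// Fetched once after glLinkProgram(program); the sampler names are assumed
// to match the fragment shader's uniforms.
_locations.uniforms.textureSamplerY = glGetUniformLocation(program, "SamplerY")
_locations.uniforms.textureSamplerUV = glGetUniformLocation(program, "SamplerUV")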

