How can I render Android's YUV-NV21 camera image onto the background in real time in libgdx with OpenGL ES 2.0?


Unlike Android, I'm relatively new to GL/libgdx. The task I need to solve, namely rendering the Android camera's YUV-NV21 preview image to the screen background in libgdx in real time, has multiple facets. Here are the main concerns:

  1. The Android camera's preview image is only guaranteed to be in the YUV-NV21 space (and in the similar YV12 space, where the U and V channels are not interleaved but grouped). Assuming that most modern devices will provide an implicit RGB conversion is VERY wrong; for example, the newest Samsung Note 10.1 2014 edition only provides the YUV formats. Since nothing can be drawn to the screen in OpenGL unless it is in RGB, the color space must somehow be converted.

  2. The example in the libgdx documentation (Integrating libgdx and the device camera) uses an Android surface view that sits below everything else and draws the image with GLES 1.1. Since early March 2014, libgdx has removed OpenGL ES 1.x support because it is obsolete and nearly all devices now support GLES 2.0. If you try the same sample with GLES 2.0, the 3D objects drawn over the image come out half-transparent. Since the surface behind has nothing to do with GL, this cannot really be controlled; disabling BLENDING/TRANSLUCENCY has no effect. Therefore, the image must be rendered purely in GL.

  3. This has to happen in real time, so the color space conversion must be very fast. A software conversion using Android bitmaps would probably be too slow.

  4. As a side feature, the camera image must remain accessible from Android code, so that tasks other than drawing it on screen can be performed, e.g. sending it to a native image processor via JNI.

The question is: how can this be done properly and as fast as possible?
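(For reference, a minimal sketch, assuming the android.hardware.Camera API used throughout this thread, of how one can check which preview formats a given device actually supports; the numeric constants in the comment are the standard ImageFormat values.)

Camera camera = Camera.open(0);
for (Integer format : camera.getParameters().getSupportedPreviewFormats()) {
    //17 == ImageFormat.NV21 and 842094169 == ImageFormat.YV12 are the two guaranteed formats;
    //an RGB format may or may not appear in this list depending on the device
    Log.d("PreviewFormats", "Supported preview format: " + format);
}
camera.release();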
2 Answers

The short answer is to load the camera image channels (Y, UV) into textures and to draw these textures onto a Mesh using a custom fragment shader that performs the color space conversion for us. Since this shader runs on the GPU, it is much faster than the CPU and certainly much faster than Java code. Since the mesh is part of GL, any other 3D shapes or sprites can safely be drawn over or under it.

I started solving the problem from this answer https://dev59.com/fmct5IYBdhLWcg3wZMfn#17615696. I worked out the general approach using the following link: How to use camera view with OpenGL ES. It is written for Bada, but the principles are the same. The conversion formulas there were a bit odd, so I replaced them with the ones in the Wikipedia article YUV Conversion to/from RGB.

Here are the steps that led to the solution:

YUV-NV21 explained

Live images from the Android camera are preview images. The default color space for the camera preview (and one of the two guaranteed color spaces) is YUV-NV21. The explanation of this format is very scattered, so I'll explain it here briefly:
The image data is made of (width x height) x 3/2 bytes. The first width x height bytes are the Y channel, 1 brightness byte for each pixel. The following (width / 2) x (height / 2) x 2 = width x height / 2 bytes are the UV plane. Each two consecutive bytes are the V,U (in that order according to the NV21 specification) chroma bytes for the 2 x 2 = 4 original pixels. In other words, the UV plane is (width / 2) x (height / 2) pixels in size and is downsampled by a factor of 2 in each dimension. In addition, the U,V chroma bytes are interleaved.
Here is a very nice image that explains YUV-NV12; NV21 just has the U,V bytes flipped:

(image: YUV-NV12 memory layout)
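To make the layout concrete, here is a small sketch (plain Java; x, y, width, height and the frame byte array are assumed to be given) that computes where the Y, V and U bytes of pixel (x, y) live inside an NV21 buffer:

//1 luma byte per pixel, row by row
int yIndex = y * width + x;

//The UV plane starts right after the Y plane; each V,U pair covers a 2x2 block of pixels,
//so the UV row stride is still 'width' bytes ((width/2) pairs * 2 bytes each)
int uvIndex = width * height + (y / 2) * width + (x / 2) * 2;

int Y = frame[yIndex] & 0xFF;
int V = frame[uvIndex] & 0xFF;     //V comes first in NV21
int U = frame[uvIndex + 1] & 0xFF;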

How do we convert this format to RGB?

As stated in the question, doing this conversion in Android code would take too much time. Fortunately, it can be done inside a GL shader, which runs on the GPU. That makes it very fast.

The general idea is to pass the channels of our image to the shader as textures and render them in a way that performs the RGB conversion. For this, we must first copy the channels of our image into buffers that can be passed to textures:

byte[] image;
ByteBuffer yBuffer, uvBuffer;

...

yBuffer.put(image, 0, width*height);
yBuffer.position(0);

uvBuffer.put(image, width*height, width*height/2);
uvBuffer.position(0);
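The buffers themselves only need to be allocated once, outside the per-frame path; a minimal sketch (mirroring what the full source further below does), assuming the preview size is width x height:

//Direct buffers live outside the JVM heap, so GL can read them without an extra copy
yBuffer = ByteBuffer.allocateDirect(width*height);     //1 byte per Y pixel
uvBuffer = ByteBuffer.allocateDirect(width*height/2);  //(width/2 * height/2) pixels, 2 bytes each
yBuffer.order(ByteOrder.nativeOrder());
uvBuffer.order(ByteOrder.nativeOrder());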

Then we pass these buffers to actual GL textures:
/*
 * Prepare the Y channel texture
 */

//Set texture slot 0 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0);
yTexture.bind();

//Y texture is (width*height) in size and each pixel is one byte; 
//by setting GL_LUMINANCE, OpenGL puts this byte into R,G and B 
//components of the texture
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE, 
    width, height, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE, yBuffer);

//Use linear interpolation when magnifying/minifying the texture to 
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, 
    GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, 
    GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, 
    GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, 
    GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);

/*
 * Prepare the UV channel texture
 */

//Set texture slot 1 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE1);
uvTexture.bind();

//UV texture is (width/2*height/2) in size (downsampled by 2 in 
//both dimensions, each pixel corresponds to 4 pixels of the Y channel) 
//and each pixel is two bytes. By setting GL_LUMINANCE_ALPHA, OpenGL 
//puts the first byte (V) into the R,G and B components of the texture
//and the second byte (U) into the A component of the texture. That's 
//why we find U and V at A and R respectively in the fragment shader code.
//Note that we could have also read V from G or B.
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE_ALPHA, 
    width/2, height/2, 0, GL20.GL_LUMINANCE_ALPHA, GL20.GL_UNSIGNED_BYTE, 
    uvBuffer);

//Use linear interpolation when magnifying/minifying the texture to 
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, 
    GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, 
    GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, 
    GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, 
    GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);

Next, we render our previously prepared mesh (which covers the entire screen). The shader takes care of rendering the bound textures onto the mesh:
shader.begin();

//Set the uniform y_texture object to the texture at slot 0
shader.setUniformi("y_texture", 0);

//Set the uniform uv_texture object to the texture at slot 1
shader.setUniformi("uv_texture", 1);

mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();

Finally, the shader takes over the job of rendering our textures onto the mesh. The fragment shader that performs the actual conversion looks like this:

String fragmentShader = 
    "#ifdef GL_ES\n" +
    "precision highp float;\n" +
    "#endif\n" +

    "varying vec2 v_texCoord;\n" +
    "uniform sampler2D y_texture;\n" +
    "uniform sampler2D uv_texture;\n" +

    "void main (void){\n" +
    "   float r, g, b, y, u, v;\n" +

    //We had put the Y values of each pixel to the R,G,B components by 
    //GL_LUMINANCE, that's why we're pulling it from the R component,
    //we could also use G or B
    "   y = texture2D(y_texture, v_texCoord).r;\n" + 

    //We had put the U and V values of each pixel to the A and R,G,B 
    //components of the texture respectively using GL_LUMINANCE_ALPHA. 
    //Since U,V bytes are interspread in the texture, this is probably 
    //the fastest way to use them in the shader
    "   u = texture2D(uv_texture, v_texCoord).a - 0.5;\n" +
    "   v = texture2D(uv_texture, v_texCoord).r - 0.5;\n" +

    //The numbers are just YUV to RGB conversion constants
    "   r = y + 1.13983*v;\n" +
    "   g = y - 0.39465*u - 0.58060*v;\n" +
    "   b = y + 2.03211*u;\n" +

    //We finally set the RGB color of our pixel
    "   gl_FragColor = vec4(r, g, b, 1.0);\n" +
    "}\n"; 

Note that we access both the Y and UV textures with the same coordinate variable v_texCoord. This is because v_texCoord ranges from 0.0 to 1.0 and scales from one end of the texture to the other, rather than being an actual texture pixel coordinate. This is one of the nicest features of shaders.

The full source code

Since libgdx is cross-platform, we need an object that can be extended differently on each platform and that handles the device camera and the rendering. For example, you might want to bypass the YUV-RGB shader conversion entirely if you can get RGB images directly. For this reason, we need a device camera controller interface that each platform will implement:
public interface PlatformDependentCameraController {

    void init();

    void renderBackground();

    void destroy();
} 
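For example, a hypothetical desktop stub (handy for running the libgdx project on the desktop, where there is no camera) can simply do nothing:

public class DesktopDependentCameraController implements PlatformDependentCameraController {

    @Override
    public void init() {
        //No camera on the desktop in this sketch, nothing to set up
    }

    @Override
    public void renderBackground() {
        //Leave the background as whatever glClear produced
    }

    @Override
    public void destroy() {
        //Nothing to release
    }
}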

The Android version of this interface is as follows (it assumes the live camera image is 1280x720 pixels):
public class AndroidDependentCameraController implements PlatformDependentCameraController, Camera.PreviewCallback {

    private static byte[] image; //The image buffer that will hold the camera image when preview callback arrives

    private Camera camera; //The camera object

    //The Y and UV buffers that will pass our image channel data to the textures
    private ByteBuffer yBuffer;
    private ByteBuffer uvBuffer;

    ShaderProgram shader; //Our shader
    Texture yTexture; //Our Y texture
    Texture uvTexture; //Our UV texture
    Mesh mesh; //Our mesh that we will draw the texture on

    public AndroidDependentCameraController(){

        //Our YUV image is 12 bits per pixel
        image = new byte[1280*720/8*12];
    }

    @Override
    public void init(){

        /*
         * Initialize the OpenGL/libgdx stuff
         */

        //Do not enforce power of two texture sizes
        Texture.setEnforcePotImages(false);

        //Allocate textures
        yTexture = new Texture(1280,720,Format.Intensity); //A 8-bit per pixel format
        uvTexture = new Texture(1280/2,720/2,Format.LuminanceAlpha); //A 16-bit per pixel format

        //Allocate buffers on the native memory space, not inside the JVM heap
        yBuffer = ByteBuffer.allocateDirect(1280*720);
        uvBuffer = ByteBuffer.allocateDirect(1280*720/2); //We have (width/2*height/2) pixels, each pixel is 2 bytes
        yBuffer.order(ByteOrder.nativeOrder());
        uvBuffer.order(ByteOrder.nativeOrder());

        //Our vertex shader code; nothing special
        String vertexShader = 
                "attribute vec4 a_position;                         \n" + 
                "attribute vec2 a_texCoord;                         \n" + 
                "varying vec2 v_texCoord;                           \n" + 

                "void main(){                                       \n" + 
                "   gl_Position = a_position;                       \n" + 
                "   v_texCoord = a_texCoord;                        \n" +
                "}                                                  \n";

        //Our fragment shader code; takes Y,U,V values for each pixel and calculates R,G,B colors,
        //Effectively making YUV to RGB conversion
        String fragmentShader = 
                "#ifdef GL_ES                                       \n" +
                "precision highp float;                             \n" +
                "#endif                                             \n" +

                "varying vec2 v_texCoord;                           \n" +
                "uniform sampler2D y_texture;                       \n" +
                "uniform sampler2D uv_texture;                      \n" +

                "void main (void){                                  \n" +
                "   float r, g, b, y, u, v;                         \n" +

                //We had put the Y values of each pixel to the R,G,B components by GL_LUMINANCE, 
                //that's why we're pulling it from the R component, we could also use G or B
                "   y = texture2D(y_texture, v_texCoord).r;         \n" + 

                //We had put the U and V values of each pixel to the A and R,G,B components of the
                //texture respectively using GL_LUMINANCE_ALPHA. Since U,V bytes are interspread 
                //in the texture, this is probably the fastest way to use them in the shader
                "   u = texture2D(uv_texture, v_texCoord).a - 0.5;  \n" +                                   
                "   v = texture2D(uv_texture, v_texCoord).r - 0.5;  \n" +


                //The numbers are just YUV to RGB conversion constants
                "   r = y + 1.13983*v;                              \n" +
                "   g = y - 0.39465*u - 0.58060*v;                  \n" +
                "   b = y + 2.03211*u;                              \n" +

                //We finally set the RGB color of our pixel
                "   gl_FragColor = vec4(r, g, b, 1.0);              \n" +
                "}                                                  \n"; 

        //Create and compile our shader
        shader = new ShaderProgram(vertexShader, fragmentShader);

        //Create our mesh that we will draw on, it has 4 vertices corresponding to the 4 corners of the screen
        mesh = new Mesh(true, 4, 6, 
                new VertexAttribute(Usage.Position, 2, "a_position"), 
                new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord"));

        //The vertices include the screen coordinates (between -1.0 and 1.0) and texture coordinates (between 0.0 and 1.0)
        float[] vertices = {
                -1.0f,  1.0f,   // Position 0
                0.0f,   0.0f,   // TexCoord 0
                -1.0f,  -1.0f,  // Position 1
                0.0f,   1.0f,   // TexCoord 1
                1.0f,   -1.0f,  // Position 2
                1.0f,   1.0f,   // TexCoord 2
                1.0f,   1.0f,   // Position 3
                1.0f,   0.0f    // TexCoord 3
        };

        //The indices come in trios of vertex indices that describe the triangles of our mesh
        short[] indices = {0, 1, 2, 0, 2, 3};

        //Set vertices and indices to our mesh
        mesh.setVertices(vertices);
        mesh.setIndices(indices);

        /*
         * Initialize the Android camera
         */
        camera = Camera.open(0);

        //We set the buffer ourselves that will be used to hold the preview image
        camera.setPreviewCallbackWithBuffer(this); 

        //Set the camera parameters
        Camera.Parameters params = camera.getParameters();
        params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
        params.setPreviewSize(1280,720); 
        camera.setParameters(params);

        //Start the preview
        camera.startPreview();

        //Set the first buffer, the preview doesn't start unless we set the buffers
        camera.addCallbackBuffer(image);
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {

        //Send the buffer reference to the next preview so that a new buffer is not allocated and we use the same space
        camera.addCallbackBuffer(image);
    }

    @Override
    public void renderBackground() {

        /*
         * Because of Java's limitations, we can't reference the middle of an array and 
         * we must copy the channels in our byte array into buffers before setting them to textures
         */

        //Copy the Y channel of the image into its buffer, the first (width*height) bytes are the Y channel
        yBuffer.put(image, 0, 1280*720);
        yBuffer.position(0);

        //Copy the UV channels of the image into their buffer, the following (width*height/2) bytes are the UV channel; the U and V bytes are interspread
        uvBuffer.put(image, 1280*720, 1280*720/2);
        uvBuffer.position(0);

        /*
         * Prepare the Y channel texture
         */

        //Set texture slot 0 as active and bind our texture object to it
        Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0);
        yTexture.bind();

        //Y texture is (width*height) in size and each pixel is one byte; by setting GL_LUMINANCE, OpenGL puts this byte into R,G and B components of the texture
        Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE, 1280, 720, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE, yBuffer);

        //Use linear interpolation when magnifying/minifying the texture to areas larger/smaller than the texture size
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);


        /*
         * Prepare the UV channel texture
         */

        //Set texture slot 1 as active and bind our texture object to it
        Gdx.gl.glActiveTexture(GL20.GL_TEXTURE1);
        uvTexture.bind();

        //UV texture is (width/2*height/2) in size (downsampled by 2 in both dimensions, each pixel corresponds to 4 pixels of the Y channel) 
        //and each pixel is two bytes. By setting GL_LUMINANCE_ALPHA, OpenGL puts the first byte (V) into the R,G and B components of the texture
        //and the second byte (U) into the A component of the texture. That's why we find U and V at A and R respectively in the fragment shader code.
        //Note that we could have also read V from G or B.
        Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE_ALPHA, 1280/2, 720/2, 0, GL20.GL_LUMINANCE_ALPHA, GL20.GL_UNSIGNED_BYTE, uvBuffer);

        //Use linear interpolation when magnifying/minifying the texture to areas larger/smaller than the texture size
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);

        /*
         * Draw the textures onto a mesh using our shader
         */

        shader.begin();

        //Set the uniform y_texture object to the texture at slot 0
        shader.setUniformi("y_texture", 0);

        //Set the uniform uv_texture object to the texture at slot 1
        shader.setUniformi("uv_texture", 1);

        //Render our mesh using the shader, which in turn will use our textures to render their content on the mesh
        mesh.render(shader, GL20.GL_TRIANGLES);
        shader.end();
    }

    @Override
    public void destroy() {
        camera.stopPreview();
        camera.setPreviewCallbackWithBuffer(null);
        camera.release();
    }
}

The main application part just makes sure that init() is called once at the beginning, renderBackground() is called every render cycle, and destroy() is called once at the end:
public class YourApplication implements ApplicationListener {

    private final PlatformDependentCameraController deviceCameraControl;

    public YourApplication(PlatformDependentCameraController cameraControl) {
        this.deviceCameraControl = cameraControl;
    }

    @Override
    public void create() {              
        deviceCameraControl.init();
    }

    @Override
    public void render() {      
        Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);

        //Render the background that is the live camera image
        deviceCameraControl.renderBackground();

        /*
         * Render anything here (sprites/models etc.) that you want to go on top of the camera image
         */
    }

    @Override
    public void dispose() {
        deviceCameraControl.destroy();
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }
}

The only other Android-specific part is the following extremely short main Android code: you just create a new Android-specific device camera handler and pass it to the main libgdx object:
public class MainActivity extends AndroidApplication {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        AndroidApplicationConfiguration cfg = new AndroidApplicationConfiguration();
        cfg.useGL20 = true; //This line is obsolete in the newest libgdx version
        cfg.a = 8;
        cfg.b = 8;
        cfg.g = 8;
        cfg.r = 8;

        PlatformDependentCameraController cameraControl = new AndroidDependentCameraController();
        initialize(new YourApplication(cameraControl), cfg);

        graphics.getView().setKeepScreenOn(true);
    }
}

How fast is it?

I tested this on two devices. While the measurements are not constant across frames, a general profile can be observed:

Samsung Galaxy Note II LTE - (GT-N7105): Has an ARM Mali-400 MP4 GPU.
  • Rendering one frame takes around 5-6 ms, with occasional jumps to around 15 ms every couple of seconds
  • The actual rendering line (mesh.render(shader, GL20.GL_TRIANGLES);) consistently takes 0-1 ms
  • Creation and binding of both textures takes 1-3 ms in total
  • The ByteBuffer copies generally take 1-3 ms in total but occasionally jump to around 7 ms, probably because the image buffer is being moved around in the JVM heap
Samsung Galaxy Note 10.1 2014 - (SM-P600): Has an ARM Mali-T628 GPU.
  • Rendering one frame takes around 2-4 ms, with rare jumps to around 6-10 ms
  • The actual rendering line (mesh.render(shader, GL20.GL_TRIANGLES);) consistently takes 0-1 ms
  • Creation and binding of both textures takes 1-3 ms in total but jumps to around 6-9 ms every couple of seconds
  • The ByteBuffer copies generally take 0-2 ms in total but very rarely jump to around 6 ms
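For reference, numbers like these can be gathered with nothing fancier than System.nanoTime() deltas; a minimal sketch of that kind of measurement (here wrapped around the whole background render, inside render()):

long start = System.nanoTime();
deviceCameraControl.renderBackground();
Gdx.app.log("Profile", "renderBackground: " + (System.nanoTime() - start) / 1000000 + " ms");

The finer-grained numbers come from placing the same kind of timers around the ByteBuffer copies, the texture uploads and the mesh.render() call inside renderBackground().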
Please don't hesitate to share if you think these profiles can be made faster with some other method. I hope this little tutorial helped.

Thanks for the great post! I just tried it on my Nexus 4, but unfortunately only a green screen gets rendered. Do you know what could cause this? It would be great if you could help! - florianbaethge
We ran into the same problem on some of our devices; it looks like a device-specific issue. We are investigating. - Ayberk Özgür
OK, thanks... :) One workaround is to kick off the preview loop by calling setPreviewTexture... but it's still strange that this happens on some devices. - florianbaethge
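(For reference, a rough sketch of the workaround florianbaethge describes: handing the camera a dummy SurfaceTexture before startPreview() so that the preview loop keeps delivering frames; the texture name 10 used here is arbitrary.)

//Before camera.startPreview():
SurfaceTexture dummySurface = new SurfaceTexture(10); //arbitrary, unused texture name
try {
    camera.setPreviewTexture(dummySurface);
} catch (IOException e) {
    Gdx.app.error("Camera", "Could not set dummy preview texture", e);
}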
I tried your approach, but all textures added after renderBackground() get filled with the camera background. Here is a screenshot of what I get: https://pp.vk.me/c625130/v625130313/d25b/G24Owy2quY0.jpg - knizhnikov
This is a really good explanation and example, thank you! - Austin


For the fastest and most optimized way, just use the common GL extension:

//Fragment Shader
#extension GL_OES_EGL_image_external : require
uniform samplerExternalOES u_Texture;

Then, on the Java side:
surfaceTexture = new SurfaceTexture(textureIDs[0]);
try {
   someCamera.setPreviewTexture(surfaceTexture);
} catch (IOException t) {
   Log.e(TAG, "Cannot set preview texture target!");
}

someCamera.startPreview();

private static final int GL_TEXTURE_EXTERNAL_OES = 0x8D65;

And in Java, on the GL thread:

GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GL_TEXTURE_EXTERNAL_OES, textureIDs[0]);
GLES20.glUniform1i(uTextureHandle, 0);

The color conversion is already done for you. You can do whatever you want right in the fragment shader.
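Putting the Java pieces together, a minimal sketch (an assumption, not the answerer's exact code) of how textureIDs[0] might be created and kept up to date, reusing the names from the snippets above:

//Create the external texture that the SurfaceTexture will receive camera frames into
int[] textureIDs = new int[1];
GLES20.glGenTextures(1, textureIDs, 0);
GLES20.glBindTexture(GL_TEXTURE_EXTERNAL_OES, textureIDs[0]);
GLES20.glTexParameterf(GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameterf(GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

//Each frame, on the GL thread, latch the newest camera image before drawing
surfaceTexture.updateTexImage();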

Overall this is not a libgdx solution, since it is platform dependent. You can initialize the platform-dependent parts in a wrapper and then pass it to your libgdx Activity.

Hope this saves you some research time.


When you use OpenCV you don't need to worry about the basics; they have a good API you can use directly. If you want to do some object recognition, that would be the best way. You could do it with your own compute shader and read back the resolved values after every frame, but you would have to be very good at math to beat the long-developed OpenCV library. If you only want to change the visuals, just use a SurfaceTexture. There is also the glReadPixels() read-back option, but I don't recommend it if you need performance. - fky
I'm not very familiar with OpenCV, but trying to get OpenCV to do the conversion on the GPU might be a good idea (I hear it has that capability) and then somehow render the image on the screen. The problem is that the OpenCV processing happens in an external library that needs a plain byte array, and I can't really change that. - Ayberk Özgür
Please tell me what your goal is. What do you want to get out of the data? It feels like you're cracking a small nut with a sledgehammer ;) - fky
OK, here it is: I grab the live camera image, convert it to RGB and display it on the screen, while at the same time sending it to native code where some image processing is done in the background with OpenCV, and some simple results are returned to the Java code. The native code in question lives in another library developed by other people. Don't worry, the current solution works well and is fast enough; I just wanted to share my solution in this thread :) - Ayberk Özgür
The way I use OpenCV is to bypass the whole OpenCV4Android SDK/ndk-build and build the actual OpenCV code into shared libraries with cmake and a standalone NDK toolchain, loading them dynamically at runtime. This gets rid of both the OpenCV Manager and the ndk-build process, each of which has its pros and cons. If you want, I can point you to my rant/argument about this; it will be posted publicly in a few days. In short, I didn't want to introduce an OpenCV dependency into the tutorial code unless absolutely necessary. - Ayberk Özgür
