OpenGL projective texture mapping with shaders


I am trying to implement a simple projective texture mapping approach using shaders in OpenGL 3+. While there are some examples on the web, I am having trouble creating a working example with shaders.

I am actually planning to use two shaders, one for normal scene drawing and another one for projective texture mapping. I have a function for drawing the scene, void ProjTextureMappingScene::renderScene(GLFWwindow *window), and I use glUseProgram() to switch between the shaders. The normal drawing works fine. However, it is not clear to me how I am supposed to render the projected texture on top of a cube that is already textured. Do I need to use a stencil buffer or a framebuffer object for this (the rest of the scene should not be affected)?
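
Roughly, the switching between the two programs looks like this (only a sketch; normalProgramID and projTexProgramID are placeholder names for my two shader programs):

void ProjTextureMappingScene::renderScene(GLFWwindow *window)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // normal scene drawing
    glUseProgram(normalProgramID);       // placeholder ID
    // ... set uniforms, bind textures, draw the scene ...

    // objects that should receive the projected texture
    glUseProgram(projTexProgramID);      // placeholder ID
    // ... set TexGenMat / InvViewMat, bind projMap, draw those objects ...

    glfwSwapBuffers(window);
}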

I also believe that my projective texture mapping shaders are incorrect, since the second time I render the cube it shows up black. Furthermore, I tried debugging with colors, and only the t component in the shader seems to be non-zero (so the cube appears green). I overwrite texColor in the fragment shader below for debugging purposes only.

Vertex shader

#version 330

uniform mat4 TexGenMat;
uniform mat4 InvViewMat;

uniform mat4 P;
uniform mat4 MV;
uniform mat4 N;

layout (location = 0) in vec3 inPosition;
//layout (location = 1) in vec2 inCoord;
layout (location = 2) in vec3 inNormal;

out vec3 vNormal, eyeVec;
out vec2 texCoord;
out vec4 projCoords;

void main()
{
    vNormal = (N * vec4(inNormal, 0.0)).xyz;

    vec4 posEye    = MV * vec4(inPosition, 1.0);
    vec4 posWorld  = InvViewMat * posEye;
    projCoords     = TexGenMat * posWorld;

    // only needed for specular component
    // currently not used
    eyeVec = -posEye.xyz;

    gl_Position = P * MV * vec4(inPosition, 1.0);
}

Fragment shader

#version 330

uniform sampler2D projMap;
uniform sampler2D gSampler;
uniform vec4 vColor;

in vec3 vNormal, lightDir, eyeVec;
//in vec2 texCoord;
in vec4 projCoords;

out vec4 outputColor;

struct DirectionalLight
{
    vec3 vColor;
    vec3 vDirection;
    float fAmbientIntensity;
};

uniform DirectionalLight sunLight;

void main (void)
{
    // suppress the reverse projection
    if (projCoords.q > 0.0)
    {
        vec2 finalCoords = projCoords.st / projCoords.q;
        vec4 vTexColor = texture(gSampler, finalCoords);
        // only t has non-zero values..why?
        vTexColor = vec4(finalCoords.s, finalCoords.t, finalCoords.r, 1.0);
        //vTexColor = vec4(projCoords.s, projCoords.t, projCoords.r, 1.0);
        float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));
        outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
}

Creating the TexGen matrix

biasMatrix = glm::mat4(0.5f, 0, 0, 0.5f,
                  0, 0.5f, 0, 0.5f,
                  0, 0, 0.5f, 0.5f,
                  0, 0, 0, 1);

    // 4:3 perspective with 45 fov
    projectorP = glm::perspective(45.0f * zoomFactor, 4.0f / 3.0f, 0.1f, 1000.0f);
    projectorOrigin = glm::vec3(-3.0f, 3.0f, 0.0f);
    projectorTarget = glm::vec3(0.0f, 0.0f, 0.0f);
    projectorV = glm::lookAt(projectorOrigin, // projector origin
                                    projectorTarget,     // project on object at origin 
                                    glm::vec3(0.0f, 1.0f, 0.0f)   // Y axis is up
                                    );
    mModel = glm::mat4(1.0f);
...
texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mModel*mModelView);
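
A side note on the bias matrix: GLM's 16-scalar mat4 constructor takes its arguments in column-major order, so it is easy to accidentally end up with the transpose of the matrix written out row by row. An unambiguous way to build the same [-1, 1] to [0, 1] bias transform (just a sketch of an equivalent construction) is:

// bias transform that maps NDC [-1, 1] to texture space [0, 1]:
// scale by 0.5, then translate by 0.5 on every axis
glm::mat4 bias = glm::translate(glm::mat4(1.0f), glm::vec3(0.5f)) *
                 glm::scale(glm::mat4(1.0f), glm::vec3(0.5f));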

Rendering the cube again

It is not clear to me what the modelview matrix of the cube should be. Should it use the slide projector's view matrix (as it does now) or the normal camera's view matrix? Currently the cube is rendered black (green when debugging) in the scene view, just as it appears from the slide projector (I made a toggle hotkey so that I can see what the slide projector "sees"). The cube also moves with the view. How do I get the projection cast onto the cube itself?

mModel = glm::translate(projectorV, projectorOrigin);
// bind projective texture
tTextures[2].bindTexture();
// set all uniforms
...
// bind VBO data and draw
glBindVertexArray(uiVAOSceneObjects);
glDrawArrays(GL_TRIANGLES, 6, 36);

Toggling between the main scene camera and the slide projector camera

if (useMainCam)
{
    mCurrent   = glm::mat4(1.0f);
    mModelView = mModelView*mCurrent;
    mProjection = *pipeline->getProjectionMatrix();
}
else
{
    mModelView  = projectorV;
    mProjection = projectorP;
}
1 Answer


I have solved the problem. One of the issues was that I was mixing up the matrices of the two camera systems (the world camera and the projective-texture camera). Now, when I set the uniforms for the projective texture mapping part, I use the correct MVP values - the same matrices I use for the world scene:

glUniformMatrix4fv(iPTMProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iPTMNormalLoc, 1, GL_FALSE, glm::value_ptr(glm::transpose(glm::inverse(mCurrent))));
glUniformMatrix4fv(iPTMModelViewLoc, 1, GL_FALSE, glm::value_ptr(mCurrent));
glUniformMatrix4fv(iTexGenMatLoc, 1, GL_FALSE, glm::value_ptr(texGenMatrix));
glUniformMatrix4fv(iInvViewMatrix, 1, GL_FALSE, glm::value_ptr(invViewMatrix));

In addition, invViewMatrix is just the inverse of the view matrix, not of the modelview matrix (in my case this did not change the behaviour, since the model matrix was the identity, but it is still wrong). For my project I only wanted to selectively render a few objects with projective texturing. To do that, for each such object I have to make sure that the currently used shader program is the one set up for projective texturing, via glUseProgram(projectiveTextureMappingProgramID). Next, I compute the matrices needed for that object:

texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mView);
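
Tying the per-object steps together, the draw path for one projectively textured object roughly looks like this (a sketch; the uniform locations, VAO and texture names are the ones from my snippets above):

glUseProgram(projectiveTextureMappingProgramID);

// matrices for this object
texGenMatrix  = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mView);
glUniformMatrix4fv(iTexGenMatLoc,  1, GL_FALSE, glm::value_ptr(texGenMatrix));
glUniformMatrix4fv(iInvViewMatrix, 1, GL_FALSE, glm::value_ptr(invViewMatrix));

// projMap and gSampler must be bound to different texture units,
// with the corresponding sampler uniforms set accordingly
tTextures[2].bindTexture();
glBindVertexArray(uiVAOSceneObjects);
glDrawArrays(GL_TRIANGLES, 6, 36);   // e.g. the cube from the question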

Back to the shaders: the vertex shader is correct as it is, except that I re-added the UV texture coordinates (inCoord) of the current object and store them in texCoord.

For the fragment shader, I changed the main function to clamp the projective texture so that it does not repeat (I could not get client-side clamping with GL_CLAMP_TO_EDGE to work), and to use the object's default texture and UV coordinates whenever the projector does not cover the whole object (I also removed lighting from the projective texture, since it is not needed in my case):

void main (void)
{
    vec2 finalCoords    = projCoords.st / projCoords.q;
    vec4 vTexColor      = texture(gSampler, texCoord);
    vec4 vProjTexColor  = texture(projMap, finalCoords);
    //vec4 vProjTexColor  = textureProj(projMap, projCoords);
    float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));

    // suppress the reverse projection
    if (projCoords.q > 0.0)
    {
        // CLAMP PROJECTIVE TEXTURE (for some reason gl_clamp did not work...)
        if(projCoords.s > 0 && projCoords.t > 0 && finalCoords.s < 1 && finalCoords.t < 1)
            //outputColor = vProjTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
            outputColor = vProjTexColor*vColor;
        else
            outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
    else
    {
        outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
}
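
As an alternative to clamping in the shader, the wrap mode of the projector texture can in principle be set on the client side. This is only a sketch of what I mean by client-side clamping (projTextureID is a placeholder); note that it only controls what texture() returns outside [0, 1] - the fallback to the object's own texture would still have to happen in the shader:

glBindTexture(GL_TEXTURE_2D, projTextureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
GLfloat border[] = { 0.0f, 0.0f, 0.0f, 0.0f };   // transparent black outside [0, 1]
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border);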

If you get stuck and for some reason cannot get the shaders to work, you can check out an example in the OpenGL 4.0 Shading Language Cookbook (the chapter on textures) - I only discovered it after solving the problem myself.

In addition to all of the above, a great help for debugging whether the algorithm works correctly is to draw the frustum of the projector camera (as a wireframe). I used a separate shader for the frustum drawing. The fragment shader just assigns a solid color, while the vertex shader is listed below together with an explanation:

#version 330

// input vertex data
layout(location = 0) in vec3 vp;

uniform mat4 P;
uniform mat4 MV;
uniform mat4 invP;
uniform mat4 invMV;
void main()
{
    /*The transformed clip space position c of a
    world space vertex v is obtained by transforming 
    v with the product of the projection matrix P 
    and the modelview matrix MV

    c = P MV v

    So, if we could solve for v, then we could 
    generate vertex positions by plugging in clip 
    space positions. For your frustum, one line 
    would be between the clip space positions 

    (-1,-1,near) and (-1,-1,far), 

    the lower left edge of the frustum, for example.

    NB: If you would like to mix normalized device 
    coords (x,y) and eye space coords (near,far), 
    you need an additional step here. Modify your 
    clip position as follows

    c' = (c.x * c.z, c.y * c.z, c.z, c.z)

    otherwise you would need to supply both the z 
    and w for c, which might be inconvenient. Simply 
    use c' instead of c below.


    To solve for v, multiply both sides of the equation above with 

          -1       
    (P MV) 

    This gives

          -1      
    (P MV)   c = v

    This is equivalent to

      -1  -1      
    MV   P   c = v

     -1
    P   is given by

    |(r-l)/(2n)     0         0      (r+l)/(2n) |
    |     0    (t-b)/(2n)     0      (t+b)/(2n) |
    |     0         0         0         -1      |
    |     0         0   -(f-n)/(2fn) (f+n)/(2fn)|

    where l, r, t, b, n, and f are the parameters in the glFrustum() call.

    If you don't want to fool with inverting the 
    model matrix, the info you already have can be 
    used instead: the forward, right, and up 
    vectors, in addition to the eye position.

    First, go from clip space to eye space

         -1   
    e = P   c

    Next go from eye space to world space

    v = eyePos - forward*e.z + right*e.x + up*e.y

    assuming x = right, y = up, and -z = forward.
    */
    vec4 fVp = invMV * invP * vec4(vp, 1.0);
    gl_Position = P * MV * fVp;
}

The uniforms are used as follows (make sure you use the correct matrices):

// projector matrices
glUniformMatrix4fv(iFrustumInvProjectionLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorP)));
glUniformMatrix4fv(iFrustumInvMVLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorV)));
// world camera
glUniformMatrix4fv(iFrustumProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iFrustumModelViewLoc, 1, GL_FALSE, glm::value_ptr(mModelView));

To get the input vertices needed by the frustum vertex shader, you can compute the frustum corner coordinates as follows (and then add them to a vertex array):

glm::vec3 ftl = glm::vec3(-1, +1, pFar); //far top left
glm::vec3 fbr = glm::vec3(+1, -1, pFar); //far bottom right
glm::vec3 fbl = glm::vec3(-1, -1, pFar); //far bottom left
glm::vec3 ftr = glm::vec3(+1, +1, pFar); //far top right
glm::vec3 ntl = glm::vec3(-1, +1, pNear); //near top left
glm::vec3 nbr = glm::vec3(+1, -1, pNear); //near bottom right
glm::vec3 nbl = glm::vec3(-1, -1, pNear); //near bottom left
glm::vec3 ntr = glm::vec3(+1, +1, pNear); //near top right

glm::vec3   frustum_coords[36] = {
    // near
    ntl, nbl, ntr, // 1 triangle
    ntr, nbl, nbr,
    // right
    nbr, ftr, ntr,
    ftr, nbr, fbr,
    // left
    nbl, ftl, ntl,
    ftl, nbl, fbl,
    // far
    ftl, fbl, fbr,
    fbr, ftr, ftl,
    //bottom
    nbl, fbr, fbl,
    fbr, nbl, nbr,
    //top
    ntl, ftr, ftl,
    ftr, ntl, ntr
};
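
The coordinates above form solid triangles; one way to render them as a wireframe is to switch the polygon mode just for this draw call (a sketch; frustumProgramID and uiVAOFrustum are placeholder names):

glUseProgram(frustumProgramID);               // the frustum shader above
glBindVertexArray(uiVAOFrustum);              // VAO holding frustum_coords
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);    // rasterize only triangle edges
glDrawArrays(GL_TRIANGLES, 0, 36);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);    // restore the default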

To top it all off, it is quite nice to see the result:

texture projection example image

As you can see, I have applied two projective textures: a Resident Evil image on Blender's Suzanne monkey head, and a smiley texture on the floor and the small cube. You can also see that the small cube is only partially covered by the projective texture, while the rest of it uses its default texture. Finally, you can see the green frustum wireframe of the projector camera - everything looks correct.
