How to apply a CIFilter to an MTLTexture generated by ARMatteGenerator?

I am working from Apple's sample project related to using the ARMatteGenerator to generate an MTLTexture that can be used as an occlusion matte in the people-occlusion technology.
I would like to determine how I could run the generated matte through a CIFilter. In my code, I am "filtering" the matte like so:
func updateMatteTextures(commandBuffer: MTLCommandBuffer) {
    guard let currentFrame = session.currentFrame else {
        return
    }

    // Generate the segmentation matte and dilated depth for this frame
    alphaTexture = matteGenerator.generateMatte(from: currentFrame, commandBuffer: commandBuffer)
    dilatedDepthTexture = matteGenerator.generateDilatedDepth(from: currentFrame, commandBuffer: commandBuffer)

    // Wrap the matte texture in a CIImage and run it through the filter
    var targetImage: CIImage?
    targetImage = CIImage(mtlTexture: alphaTexture!, options: nil)
    monoAlphaCIFilter?.setValue(targetImage!, forKey: kCIInputImageKey)
    monoAlphaCIFilter?.setValue(CIColor.red, forKey: kCIInputColorKey)
    targetImage = (monoAlphaCIFilter?.outputImage)!

    // Render the filtered result back into the matte texture
    let drawingBounds = CGRect(origin: .zero, size: CGSize(width: alphaTexture!.width, height: alphaTexture!.height))
    context.render(targetImage!, to: alphaTexture!, commandBuffer: commandBuffer, bounds: drawingBounds, colorSpace: CGColorSpaceCreateDeviceRGB())
}
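For reference, the context used in the last line above is assumed to be a CIContext backed by the same Metal device as the renderer. If yours is not created that way already, here is a minimal sketch (the name device is assumed to be the renderer's MTLDevice):

import CoreImage
import Metal

// Create the Core Image context on the renderer's own Metal device so
// the filter work can be encoded onto the same command buffer.
// The matte is a plain alpha mask, so color management is disabled.
let context = CIContext(mtlDevice: device, options: [
    .workingColorSpace: NSNull()
])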

When I composite the matte texture and the background, there is no filtered effect applied to the matte. This is how the textures are being composited:
func compositeImagesWithEncoder(renderEncoder: MTLRenderCommandEncoder) {
    guard let textureY = capturedImageTextureY, let textureCbCr = capturedImageTextureCbCr else {
        return
    }

    // Push a debug group allowing us to identify render commands in the GPU Frame Capture tool
    renderEncoder.pushDebugGroup("CompositePass")

    // Set render command encoder state
    renderEncoder.setCullMode(.none)
    renderEncoder.setRenderPipelineState(compositePipelineState)
    renderEncoder.setDepthStencilState(compositeDepthState)

    // Setup plane vertex buffers
    renderEncoder.setVertexBuffer(imagePlaneVertexBuffer, offset: 0, index: 0)
    renderEncoder.setVertexBuffer(scenePlaneVertexBuffer, offset: 0, index: 1)

    // Setup textures for the composite fragment shader
    renderEncoder.setFragmentBuffer(sharedUniformBuffer, offset: sharedUniformBufferOffset, index: Int(kBufferIndexSharedUniforms.rawValue))
    renderEncoder.setFragmentTexture(CVMetalTextureGetTexture(textureY), index: 0)
    renderEncoder.setFragmentTexture(CVMetalTextureGetTexture(textureCbCr), index: 1)
    renderEncoder.setFragmentTexture(sceneColorTexture, index: 2)
    renderEncoder.setFragmentTexture(sceneDepthTexture, index: 3)
    renderEncoder.setFragmentTexture(alphaTexture, index: 4)
    renderEncoder.setFragmentTexture(dilatedDepthTexture, index: 5)

    // Draw final quad to display
    renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
    renderEncoder.popDebugGroup()
}

How could I apply the CIFilter only to the alphaTexture generated by the ARMatteGenerator?
1 Answer


I don't think you want to apply a CIFilter to the alphaTexture. I assume you are using Apple's Effecting People Occlusion in Custom Renderers sample code. If you watch this year's Bringing People into AR WWDC session, they talk about generating a segmentation matte using ARMatteGenerator, which is what is being done with alphaTexture = matteGenerator.generateMatte(from: currentFrame, commandBuffer: commandBuffer). alphaTexture is an MTLTexture that is essentially an alpha mask of where humans have been detected in the camera frame (i.e. completely opaque where a human is and completely transparent where there is no human).
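For reference, the generator behind that call is created once with the renderer's Metal device, roughly as follows (a minimal sketch; the .half matte resolution is an assumption, not necessarily what the sample uses):

import ARKit
import Metal

// device is assumed to be the renderer's MTLDevice
let matteGenerator = ARMatteGenerator(device: device, matteResolution: .half)

// Then, once per frame with a valid command buffer:
// alphaTexture = matteGenerator.generateMatte(from: currentFrame, commandBuffer: commandBuffer)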

[Image: Apple documentation]

Adding a filter to the alpha texture will not filter the final rendered image; it will only affect the matte used in compositing. If you are trying to achieve the video linked in your previous question, I would recommend adjusting the Metal shader where the compositing occurs. In the session, they point out that they compare dilatedDepth and renderedDepth to decide whether to draw virtual content or pixels from the camera:

fragment half4 customComposition(...) {
    half4 camera = cameraTexture.sample(s, in.uv);
    half4 rendered = renderedTexture.sample(s, in.uv);
    float renderedDepth = renderedDepthTexture.sample(s, in.uv);
    half4 scene = mix(rendered, camera, rendered.a);
    half matte = matteTexture.sample(s, in.uv).r;
    float dilatedDepth = dilatedDepthTexture.sample(s, in.uv);

    if (dilatedDepth < renderedDepth) { // People are in front of the rendered content
        // Mix the virtual content and the camera feed based on the alpha provided by the matte
        return mix(scene, camera, matte);
    } else {
        // People are not in front, so just return the scene
        return scene;
    }
}

Unfortunately, the sample code does this slightly differently, but it is still fairly easy to modify. Open the Shaders.metal file and find the compositeImageFragmentShader function. Toward the end of the function you will see half4 occluderResult = mix(sceneColor, cameraColor, alpha);. This is essentially the same operation as the mix(scene, camera, matte); we saw above: based on the segmentation matte, we decide whether to use a scene pixel or a camera pixel. We can easily replace the camera pixel with an arbitrary rgba value by swapping cameraColor for a half4 representing a color. For example, we could use half4(float4(0.0, 0.0, 1.0, 1.0)) to paint all of the pixels inside the segmentation matte blue:
…
// Replacing camera color with blue
half4 occluderResult = mix(sceneColor, half4(float4(0.0, 0.0, 1.0, 1.0)), alpha);
half4 mattingResult = mix(sceneColor, occluderResult, showOccluder);
return mattingResult;

[Screencast]

Of course, you can apply other effects as well. A dynamic grayscale static effect is fairly easy to achieve.
Above compositeImageFragmentShader, add:
float random(float offset, float2 tex_coord, float time) {
    // pick two numbers that are unlikely to repeat
    float2 non_repeating = float2(12.9898 * time, 78.233 * time);

    // multiply our texture coordinates by the non-repeating numbers, then add them together
    float sum = dot(tex_coord, non_repeating);

    // calculate the sine of our sum to get a range between -1 and 1
    float sine = sin(sum);

    // multiply the sine by a big, non-repeating number so that even a small change will result in a big color jump
    float huge_number = sine * 43758.5453 * offset;

    // get just the numbers after the decimal point
    float fraction = fract(huge_number);

    // send the result back to the caller
    return fraction;
}

(The random function above is taken from @twostraws' ShaderKit.)

Then modify compositeImageFragmentShader to:

…
float randFloat = random(1.0, cameraTexCoord, rgb[0]);

half4 occluderResult = mix(sceneColor, half4(float4(randFloat, randFloat, randFloat, 1.0)), alpha);
half4 mattingResult = mix(sceneColor, occluderResult, showOccluder);
return mattingResult;

You should get:

[Static screencast]
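Note that the rgb[0] argument above reuses the camera pixel's red channel as the time seed, so the static pattern only animates as the camera image changes. If you want it to animate independently, one option (purely a sketch; it assumes you add a hypothetical time: Float field to the sample's SharedUniforms struct in ShaderTypes.h and read it in the shader in place of rgb[0]) is to feed the frame timestamp in from the Swift side:

import ARKit

// Purely illustrative: assumes SharedUniforms has gained a `time` field,
// and sharedUniformBufferAddress is the sample's pointer into the
// shared uniform buffer for the current frame.
func updateStaticEffectTime(frame: ARFrame) {
    let uniforms = sharedUniformBufferAddress.assumingMemoryBound(to: SharedUniforms.self)
    // Wrap the timestamp so it keeps enough precision as a Float
    uniforms.pointee.time = Float(frame.timestamp.truncatingRemainder(dividingBy: 1000))
}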

Finally, the debugger seems to have a hard time keeping up with the app. For me, the app froze shortly after launch when running attached to Xcode, but it was usually smooth when run on its own.


Absolutely the most thorough reply I could have asked for. Thank you for taking the time to detail this. I had not even thought to investigate the Shaders.metal file, but seeing this, I now realize that filtering the matte is not the ideal approach. Thank you! - ZbadhabitZ
