I don't think I understand Unity's rendering engine all that well.
I use a RenderTexture to generate a screenshot (which I need to manage afterwards):
screenshotRenderTexture = new RenderTexture(screenshot.width, screenshot.height, depthBufferBits, RenderTextureFormat.Default);
screenshotRenderTexture.Create();

RenderTexture currentRenderTexture = RenderTexture.active;
RenderTexture.active = screenshotRenderTexture;

Camera[] cams = Camera.allCameras;
System.Array.Sort(
    cams,
    delegate(Camera cam1, Camera cam2)
    {
        // This is easier than writing a float-to-int conversion that won't
        // floor depth deltas under 1 to zero and still handles NaNs correctly
        if (cam1.depth < cam2.depth)
            return -1;
        else if (cam1.depth > cam2.depth)
            return 1;
        else
            return 0;
    }
);

foreach (Camera cam in cams)
{
    cam.targetTexture = screenshotRenderTexture;
    cam.Render();
    cam.targetTexture = null;
}

screenshot.ReadPixels(new Rect(0, 0, screenshot.width, screenshot.height), 0, 0);
screenshot.Apply();

RenderTexture.active = currentRenderTexture;
However, if depthBufferBits is 0, the result comes out with all kinds of z-buffer errors (things rendered in the wrong order).
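For reference, the artifacts go away when I request a real depth buffer in the constructor; the 16 below is just the value I currently use (see the memory complaint further down), not a recommendation:

screenshotRenderTexture = new RenderTexture(
    screenshot.width,
    screenshot.height,
    16, // any nonzero depth buffer size avoids the sorting errors for me
    RenderTextureFormat.Default
);
screenshotRenderTexture.Create();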
I roughly understand what a depth buffer is. What I don't understand is this: if the RenderTexture is used to merge the rendering results of individual cameras, why does it need a depth buffer of its own? How exactly do these abstractions work? Does each camera produce its own image and then hand it over to the RenderTexture, or does each camera render using the RenderTexture's depth buffer? Judging by the problem I ran into (things rendered in the wrong order within a single camera), it seems to be the latter, but at the same time that somewhat contradicts the common-sense picture of these abstractions at the C# level.
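If it is the latter, one experiment might settle it: clear only the depth buffer between camera renders and see whether later cameras stop being occluded by earlier cameras' geometry. A sketch of what I have in mind, assuming the RenderTexture was created with a nonzero depth buffer (GL.Clear and RenderTexture.active are standard Unity APIs; the diagnostic itself is just my guess):

RenderTexture.active = screenshotRenderTexture;
foreach (Camera cam in cams)
{
    cam.targetTexture = screenshotRenderTexture;
    cam.Render();
    cam.targetTexture = null;

    // Point GL.Clear at the screenshot texture, then clear depth only:
    // clearDepth = true, clearColor = false keeps the image accumulated so far
    RenderTexture.active = screenshotRenderTexture;
    GL.Clear(true, false, Color.black);
}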
Finally, can I use the default depth buffer, the one used for normal rendering, for this instead? Because an extra 16 bits per pixel on mobile devices is quite painful.
Update:
Here is what I tried:
screenshotRenderTexture = new RenderTexture(
    screenshot.width,
    screenshot.height,
    0, // no depth buffer of its own; the idea is to borrow the screen's
    RenderTextureFormat.Default
);
screenshotRenderTexture.Create();

// Pair the RenderTexture's color buffer with the depth buffer
// currently used for normal rendering (the screen's)
RenderBuffer currentColorBuffer = Graphics.activeColorBuffer;
Graphics.SetRenderTarget(screenshotRenderTexture.colorBuffer, Graphics.activeDepthBuffer);
yield return new WaitForEndOfFrame();
Graphics.SetRenderTarget(currentColorBuffer, Graphics.activeDepthBuffer);
And here is what I got:
SetRenderTarget can only mix color & depth buffers from RenderTextures. You're trying to set depth buffer from the screen.
UnityEngine.Graphics:SetRenderTarget(RenderBuffer, RenderBuffer)
<ScreenshotTaking>c__Iterator21:MoveNext() (at Assets/Scripts/Managers/ScreenshotManager.cs:126)
Why can't the screen's depth buffer be mixed with a RenderTexture's color buffer?
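For what it's worth, the error message implies that mixing buffers from two different RenderTextures is allowed. Here is a sketch of that variant (untested); it wouldn't save me the extra depth memory, but it might at least confirm whether buffer mixing works at all:

// Color from one RenderTexture, depth from another; according to the
// error message above, this combination should be legal
RenderTexture colorRT = new RenderTexture(screenshot.width, screenshot.height, 0, RenderTextureFormat.Default);
RenderTexture depthRT = new RenderTexture(screenshot.width, screenshot.height, 16, RenderTextureFormat.Default);
colorRT.Create();
depthRT.Create();
Graphics.SetRenderTarget(colorRT.colorBuffer, depthRT.depthBuffer);
// ... render the cameras here, then restore the previous render target ...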