
How to hide a post-processed mesh outline when/where the mesh is hidden

I'm working on setting up an active outline in my 3D engine: a highlight effect for selected 3D characters or scenery on the screen. After working with the stencil buffer and getting unsatisfactory results (issues with concave shapes, outline thickness varying with distance from the camera, and inconsistencies between my desktop and laptop), I switched to edge detection with frame buffer sampling and got an outline I'm pretty satisfied with.

However, I am not able to hide the outline when the selected mesh is behind another mesh. This makes sense given my process, since I simply render the 2D outline from a frame buffer on top of the scene after rendering everything else.

Two screen captures of my results are below. The first is a "good" outline; the second shows the outline visible over a mesh that blocks the outlined object.

[Screenshots: a correct outline, and the outline incorrectly visible over an occluding mesh]

The rendering process runs like this:

1) Draw only the alpha of the highlighted mesh, capturing a black silhouette in a frame buffer (framebuffer1).

2) Pass the texture from framebuffer1 to a second shader that performs the edge detection. Capture the edge in framebuffer2.

3) Render the entire scene.

4) Render the texture from framebuffer2 on top of the scene.
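For concreteness, here is roughly what that pass structure looks like in LibGDX terms. This is a sketch, not my actual code: silhouetteFbo, edgeFbo, silhouetteShader, edgeShader, selectedInstance, and sceneInstances are all placeholder names.

    // Rough structure of the four passes (placeholder names throughout).
    FrameBuffer silhouetteFbo = new FrameBuffer(Pixmap.Format.RGBA8888, w, h, true);
    FrameBuffer edgeFbo = new FrameBuffer(Pixmap.Format.RGBA8888, w, h, false);

    // 1) Black silhouette of the selected mesh on a white background.
    silhouetteFbo.begin();
    Gdx.gl.glClearColor(1f, 1f, 1f, 1f);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
    modelBatch.begin(camera);
    modelBatch.render(selectedInstance, silhouetteShader); // flat black shader
    modelBatch.end();
    silhouetteFbo.end();

    // 2) Edge-detect the silhouette texture into edgeFbo.
    edgeFbo.begin();
    spriteBatch.setShader(edgeShader); // e.g. a Sobel kernel
    spriteBatch.begin();
    spriteBatch.draw(silhouetteFbo.getColorBufferTexture(), 0, 0, w, h);
    spriteBatch.end();
    edgeFbo.end();

    // 3) Render the whole scene normally to the screen.
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
    modelBatch.begin(camera);
    modelBatch.render(sceneInstances, environment);
    modelBatch.end();

    // 4) Composite the outline texture over the scene.
    // (The y-flip LibGDX FBO textures get from SpriteBatch cancels
    // out across the two 2D passes.)
    spriteBatch.setShader(null);
    spriteBatch.begin();
    spriteBatch.draw(edgeFbo.getColorBufferTexture(), 0, 0, w, h);
    spriteBatch.end();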

I have a few ideas on how to accomplish this and am hoping to get feedback on their validity, or on simpler or better methods.

First, I've thought of rendering the entire scene to a frame buffer and storing the visible silhouette of the highlighted mesh in the alpha channel (all white save where the highlighted mesh is visible). I would then perform the edge detection on the alpha channel, render the scene frame buffer, and then render the edge on top, resulting in something like this:

[Mock-up of the desired result: the outline clipped where the mesh is occluded]

To accomplish this, I thought of setting a preprocessor define only during the render pass of the highlighted object, so that the shader writes black to the alpha channel for any visible pixels.
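Sketched out (hypothetical GLSL, not a working shader from my project), that would be a single scene fragment shader with the mask guarded by a define, compiled in two variants via LibGDX's ShaderProgram.prependFragmentCode. Since the scene pass is depth tested, only the actually visible pixels of the selected mesh would write alpha 0:

    // Hypothetical scene fragment shader: color renders as usual, but
    // alpha carries the silhouette mask (0 = selected mesh, 1 = rest).
    String sceneFrag =
          "#ifdef GL_ES\n"
        + "precision mediump float;\n"
        + "#endif\n"
        + "varying vec2 v_texCoords;\n"
        + "uniform sampler2D u_texture;\n"
        + "void main() {\n"
        + "    vec3 color = texture2D(u_texture, v_texCoords).rgb;\n"
        + "#ifdef HIGHLIGHTED\n"
        + "    gl_FragColor = vec4(color, 0.0); // visible selected pixels\n"
        + "#else\n"
        + "    gl_FragColor = vec4(color, 1.0);\n"
        + "#endif\n"
        + "}\n";

    // Compile two variants of the same source.
    ShaderProgram.prependFragmentCode = "#define HIGHLIGHTED\n";
    ShaderProgram highlightVariant = new ShaderProgram(sceneVert, sceneFrag);
    ShaderProgram.prependFragmentCode = "";
    ShaderProgram normalVariant = new ShaderProgram(sceneVert, sceneFrag);

One thing I'd have to watch is that the scene frame buffer needs an alpha channel (RGBA8888) and that blending during the scene pass doesn't overwrite the mask.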

My second idea is to keep the render process outlined above, but also store the X, Y, and Z coordinates in the R, G, and B channels of framebuffer1 when rendering the silhouette of the selected mesh. Edge detection would be performed and stored in framebuffer2 as before, carrying the RGB/XYZ values over from the silhouette's alpha edges. Then, when rendering the scene, I would test whether the fragment falls within an edge stored in framebuffer2. If so, I would compare the fragment's depth against the coordinates extracted from the RGB channels (converted to camera space) to determine whether it is in front of or behind them. If the fragment is in front, it would be rendered normally; if it is behind, it would be rendered as the solid outline color. This seems like a more convoluted and error-prone method. I haven't fully grasped packing and unpacking floats in OpenGL yet, but my feeling is that I may run into floating-point precision issues when trying to store the XYZ coordinates in the RGB channels.
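From what I've read, the usual GLES2-friendly workaround for that precision worry is not to store one coordinate per 8-bit channel (only 256 levels each), but to spread a single normalized value, e.g. depth in [0,1), across all four channels. The standard pack/unpack pair looks like this (untested on my end, written as a Java string since that's how I'd feed it to LibGDX):

    // Common GLES2 pack/unpack of a normalized float into RGBA8
    // channels (borrowed from typical shadow-mapping code; untested).
    String packGlsl =
          "vec4 packFloat(float v) {\n"
        + "    vec4 enc = fract(vec4(1.0, 255.0, 65025.0, 16581375.0) * v);\n"
        + "    enc -= enc.yzww * vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);\n"
        + "    return enc;\n"
        + "}\n"
        + "float unpackFloat(vec4 rgba) {\n"
        + "    return dot(rgba, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0));\n"
        + "}\n";

Since that consumes all four channels for one value, it would mean storing camera-space depth alone rather than full XYZ, which might actually simplify the comparison in the scene pass.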

I'm using LibGDX for this project and would like to support WebGL and OpenGL ES, so solutions involving geometry shaders or newer GLSL functions aren't available to me. If anyone could comment on my proposed approaches or suggest something better, I'd really appreciate it.
