March 31, 2016

There has been a lot of research on order-independent transparency recently. Here are a few screenshots comparing the following methods:

  • Sorted -- this mode sorts the fragments in each pixel back-to-front. It's very expensive, but serves as a reference.
  • Stochastic -- as per nVidia's Stochastic Transparency research. This uses MSAA hardware to estimate the optical depth covering a given sample.
  • Depth weighted -- as per nVidia's other white paper, "A Phenomenological Scattering Model for Order-Independent Transparency." This is very cheap, and uses a fixed equation based on fragment depth to weight samples (see the sketch just after this list).
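
To make that "fixed equation" concrete, here's a minimal CPU-side sketch of the depth weighted compositing, assuming one of the depth-based weight falloffs suggested in McGuire and Bavoil's weighted blended OIT work (which the phenomenological model builds on). The constants and names are illustrative only, not the engine's actual shader code.

```cpp
// Minimal CPU-side sketch of "depth weighted" (weighted blended) compositing.
// The weight falloff below is in the spirit of the ones suggested in McGuire &
// Bavoil's weighted blended OIT paper; the exact constants are tunable.
#include <algorithm>    // std::clamp, std::max
#include <cmath>        // std::pow
#include <vector>

struct Fragment { float r, g, b, a, viewDepth; };

// Depth-based weight: nearer fragments get much more influence on the result.
static float Weight(float z, float a)
{
    float w = 0.03f / (1e-5f + std::pow(z / 200.0f, 4.0f));
    return a * std::clamp(w, 1e-2f, 3e3f);
}

// Composite an *unordered* list of fragments over a background colour.
static void Composite(
    const std::vector<Fragment>& frags,
    const float background[3], float out[3])
{
    float accum[4] = {0, 0, 0, 0};  // sum of weighted premultiplied colour, and of weighted alpha
    float revealage = 1.0f;         // product of (1 - alpha); how visible the background remains

    for (const auto& f : frags) {
        float w = Weight(f.viewDepth, f.a);
        accum[0] += f.r * f.a * w;
        accum[1] += f.g * f.a * w;
        accum[2] += f.b * f.a * w;
        accum[3] += f.a * w;
        revealage *= (1.0f - f.a);
    }

    for (int c = 0; c != 3; ++c) {
        float weightedAverage = accum[c] / std::max(accum[3], 1e-5f);
        out[c] = weightedAverage * (1.0f - revealage) + background[c] * revealage;
    }
}
```

Note that the fragment loop can run in any order -- that's exactly what makes the method order independent, and also why many overlapping layers can smear together, as we'll see below.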

Sorted

Stochastic

Depth Weighted

Unordered

You can clearly see the artifacts we get with the stochastic method here. You can also see that the depth weighted mode holds up well in some areas of the image, but produces weird results in others.

Here's a scene more similar to what might appear in a game:

Sorted

Stochastic

Depth Weighted

Here, the artifacts in the stochastic method are less obvious. Also, the volume and shape of the geometry are preserved well.

The depth weighted version looks ok when there are only a few layers of transparency. But as soon as we pile on more layers, it quickly turns to mush. The shape of the trees is lost, and we end up with trees merging into other trees.

When rendering trees like this, we need to calculate the lighting on every layer (whereas with alpha tested geometry we calculate it just once per pixel). This can become extremely expensive. One of the advantages of the stochastic method is that we can estimate the optical depth in front of a layer, and fall back to cheaper lighting for layers that are heavily occluded.
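
As a rough illustration of that last point, here's a sketch of how a stochastic visibility estimate might drive a lighting level-of-detail decision. It assumes a prior stochastic pass has filled the per-pixel MSAA depth samples (each transparent fragment writing to a random subset of samples in proportion to its alpha); the function names, stubs and threshold here are all hypothetical.

```cpp
// Sketch: use the stochastic depth samples to estimate how occluded a layer
// is, and shade heavily occluded layers with a cheaper lighting path.
#include <cstddef>

struct Colour { float r, g, b; };
struct SurfaceInputs { Colour albedo; /* normal, roughness, ... */ };

// Estimated transmittance between the camera and depth z: the fraction of
// stochastic samples that did *not* land in front of this layer.
static float EstimateTransmittance(
    const float* msaaDepths, std::size_t sampleCount, float z)
{
    std::size_t unoccluded = 0;
    for (std::size_t s = 0; s != sampleCount; ++s)
        if (msaaDepths[s] >= z) ++unoccluded;   // nothing stochastic in front at this sample
    return float(unoccluded) / float(sampleCount);
}

// Stand-ins for the real per-layer lighting paths.
static Colour FullLighting(const SurfaceInputs& s)  { return s.albedo; /* full BRDF, shadows, ... */ }
static Colour CheapLighting(const SurfaceInputs& s) { return { s.albedo.r * 0.5f, s.albedo.g * 0.5f, s.albedo.b * 0.5f }; /* e.g. ambient only */ }

// Heavily occluded layers contribute little to the final pixel, so we can
// afford a much rougher approximation for them.
Colour ShadeLayer(const SurfaceInputs& surface,
                  const float* msaaDepths, std::size_t sampleCount, float layerDepth)
{
    const float kCheapLightingThreshold = 0.25f;    // illustrative cut-off
    float transmittance = EstimateTransmittance(msaaDepths, sampleCount, layerDepth);
    return (transmittance < kCheapLightingThreshold)
        ? CheapLighting(surface)
        : FullLighting(surface);
}
```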

Likewise, in all methods there is some room to move aspects of the lighting from per-layer to per-pixel (for example, the Phenomenological Scattering Model paper does the refraction lookup once per pixel, regardless of the number of layers).
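
To make the per-layer versus per-pixel split concrete, here's a hypothetical structure for the shading loop, with the expensive refraction/background lookup hoisted out of the per-layer work. The blend inside the loop assumes a back-to-front ordered list purely to keep the sketch short; the point is only that the shared lookup happens once, however many layers there are. None of this is taken from the paper itself.

```cpp
// Sketch: split shading into a per-pixel part (done once) and a per-layer
// part (done for every transparent layer). All names are hypothetical.
#include <vector>

struct Colour { float r, g, b; };
struct Layer { Colour albedo; float alpha; /* normal, depth, ... */ };

static Colour FetchRefractedBackground(/* screen position, average normal, ... */)
{
    // Expensive screen-space lookup; done once regardless of layer count.
    return {0.2f, 0.3f, 0.4f};
}

static Colour ShadeLayerCheaply(const Layer& l)
{
    // Per-layer work kept small: e.g. a simple diffuse term.
    return l.albedo;
}

Colour ShadePixel(const std::vector<Layer>& layersBackToFront)
{
    Colour result = FetchRefractedBackground();     // per-pixel, shared by all layers
    for (const Layer& l : layersBackToFront) {      // per-layer, kept as cheap as possible
        Colour c = ShadeLayerCheaply(l);
        result.r = c.r * l.alpha + result.r * (1.0f - l.alpha);
        result.g = c.g * l.alpha + result.g * (1.0f - l.alpha);
        result.b = c.b * l.alpha + result.b * (1.0f - l.alpha);
    }
    return result;
}
```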


