April 22, 2016

The lighting parser is a set of steps (including geometry rendering and full screen passes) that occur in a similar order and configuration every frame. In some ways it is like a higher level version of a "renderpass" (or frame buffer layout). Each "subpass" in the renderpass is like a step in the lighting parser process for evaluating the frame.

But how do we map the lighting parser steps onto render pass subpasses? Do we want to use one single huge render pass for everything? Or a number of small passes?

Declarative lighting parser

Since we're using a "declarative" approach for "frame buffer layouts", we could do the same for the lighting parser. Imagine a LightingParserConfiguration object that takes a few configuration variables (related to the current platform and quality mode, and also scene features) and produces the set of steps that the lighting parser will perform.

We could use the VariantFunctions interface for this, along with some structures to define the "attachment" inputs and outputs for each step. If we have a step (such as tonemapping), we just define the input attachments, output attachments, and temporary buffers required, and add the tonemap function.

Then we would just need a small system that could collect all this information, and generate a RenderCore::FrameBufferDesc object from it.
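
As a rough sketch, a step declaration might look something like this (everything below is a hypothetical placeholder, not the real VariantFunctions or RenderCore::FrameBufferDesc interfaces):

    #include <cstdint>
    #include <functional>
    #include <vector>

    // hypothetical sketch only -- not the actual engine interfaces
    struct RenderContext {};                            // stand-in for the device/threading context
    using AttachmentSemantic = uint64_t;                // identifies "gbuffer diffuse", "hdr colour", etc

    constexpr AttachmentSemantic Semantic_HDRColour = 1;
    constexpr AttachmentSemantic Semantic_LDRColour = 2;
    constexpr AttachmentSemantic Semantic_LuminanceTemp = 3;

    struct LightingStep
    {
        std::vector<AttachmentSemantic> _inputs;        // attachments read by this step
        std::vector<AttachmentSemantic> _outputs;       // attachments written by this step
        std::vector<AttachmentSemantic> _temporaries;   // transient buffers local to this step
        std::function<void(RenderContext&)> _fn;        // the actual work (eg, the tonemap operation)
    };

    void ExecuteTonemap(RenderContext&) { /* ... tonemap draw calls ... */ }

    // build the configuration by appending the steps required for this platform & quality mode
    std::vector<LightingStep> BuildSteps()
    {
        std::vector<LightingStep> steps;
        steps.push_back(LightingStep{
            { Semantic_HDRColour },         // input: the lit HDR buffer
            { Semantic_LDRColour },         // output: the final LDR target
            { Semantic_LuminanceTemp },     // temporary working buffer
            &ExecuteTonemap });
        // ... other steps ...
        return steps;
    }

The small system mentioned above would then walk this list, match inputs to outputs by their semantic values, and generate the attachments, subpasses and dependencies for the FrameBufferDesc from that.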

This would allow us to have a very configurable pipeline, because we could just build up the list of steps as we need them, and from there generate everything required. In effect, we would declare everything that is going to happen in the lighting parser beforehand, and then the system would automatically calculate everything that needs to happen when we call Run().

This could also allow us to have a single render pass for the entire lighting parser process. But is that what we really want?

How many render passes?

There are a few things to consider:

  1. Compute shader usage
  2. Frame-to-frame variations in the render pass
  3. Parallelization within the lighting parser

Compute shaders

We can't execute compute shaders during a render pass. vkCmdDispatch can only be used outside of a render pass. This is an issue because some of our lighting parser work currently uses compute shaders (such as the tone mapping) -- effectively meaning that the render pass must be split there.
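
In practice the split would look something like this (just a minimal sketch; the image, pipeline and begin-info objects are hypothetical, and descriptor set binding is omitted):

    #include <vulkan/vulkan.h>

    // minimal sketch: splitting the frame around a compute-based tonemap
    void SplitAroundComputeTonemap(
        VkCommandBuffer cmd, VkImage hdrImage,
        VkPipeline tonemapComputePipeline,
        const VkRenderPassBeginInfo& followingPassBegin,
        uint32_t groupsX, uint32_t groupsY)
    {
        // vkCmdDispatch is illegal inside a render pass, so the current pass has to end here
        vkCmdEndRenderPass(cmd);

        // make the colour writes from the previous pass visible to the compute shader
        VkImageMemoryBarrier barrier = {};
        barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
        barrier.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
        barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
        barrier.oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
        barrier.newLayout = VK_IMAGE_LAYOUT_GENERAL;
        barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
        barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
        barrier.image = hdrImage;
        barrier.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };
        vkCmdPipelineBarrier(
            cmd,
            VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
            VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
            0, 0, nullptr, 0, nullptr, 1, &barrier);

        // the compute-based tonemap (descriptor sets omitted)
        vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, tonemapComputePipeline);
        vkCmdDispatch(cmd, groupsX, groupsY, 1);

        // begin a new render pass for whatever follows the tonemap
        vkCmdBeginRenderPass(cmd, &followingPassBegin, VK_SUBPASS_CONTENTS_INLINE);
    }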

The compute shader work could be converted to pixel shaders, I guess. But that would mean either rewriting all of the compute shader stuff, or moving it to a precalculation phase before the render pass.

Also, sometimes we might want to call vkCmdCopyImage to duplicate a buffer, and transfer commands like that are also disallowed inside a render pass. Again, it could be emulated with a pixel shader, I guess.

Frame-to-frame variations

We don't really want the render pass to change from frame to frame, because the render pass is a parameter to Vulkan graphics pipelines. There is some flexibility for "compatibility" between render passes. However, what we don't want is a situation where a change related to one subpass affects the pipelines of another subpass. In that case, it would be better to use two separate render passes, so that changes in one stage of the pipeline don't affect other stages.
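
That's because a graphics pipeline is created against a specific render pass and subpass index, roughly like this (most of the pipeline state is omitted here for brevity):

    #include <vulkan/vulkan.h>

    // illustrative only: the pipeline is tied to a particular render pass and subpass index,
    // so changes to the pass (eg, adding or removing a subpass) can invalidate pipelines built for it
    VkPipeline CreateLightingPipeline(VkDevice device, VkRenderPass lightingPass, uint32_t subpassIndex)
    {
        VkGraphicsPipelineCreateInfo info = {};
        info.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
        // ... shader stages, vertex input, blend state, etc, omitted ...
        info.renderPass = lightingPass;     // must match (or be "compatible" with) the pass used at draw time
        info.subpass = subpassIndex;        // and this exact subpass index within it

        VkPipeline pipeline = VK_NULL_HANDLE;
        vkCreateGraphicsPipelines(device, VK_NULL_HANDLE, 1, &info, nullptr, &pipeline);
        return pipeline;
    }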

This could be an issue with postprocessing effects that are sometimes enabled (such as a radial blur when damaged, or refractions in water). We could always just skip a subpass by calling NextSubpass immediately, I guess. But having many optional subpasses might blunt the benefit of having a huge render pass, anyway.
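
Skipping an optional subpass would look roughly like this (the effect flag and draw function are hypothetical):

    #include <vulkan/vulkan.h>

    void DrawRadialBlur(VkCommandBuffer);       // hypothetical optional post process effect

    // inside the big render pass, at the point where the optional effect's subpass sits
    void OptionalRadialBlurSubpass(VkCommandBuffer cmd, bool radialBlurEnabled)
    {
        if (radialBlurEnabled)
            DrawRadialBlur(cmd);                            // fill the subpass with real work
        vkCmdNextSubpass(cmd, VK_SUBPASS_CONTENTS_INLINE);  // advance either way; the subpass count is fixed
    }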

Parallelization within the lighting parser

Most of the steps in the lighting parser are not really very parallelizable with respect to each other. For example, the steps before tonemapping must all be complete before tonemapping occurs, and the steps after tonemapping can't begin until tonemapping ends.

So, in this situation, it's not clear how much benefit there is in combining everything in a giant render pass.

Simple lighting parser

The lighting parser was originally intended to have a very simple structure involving just conditions and function calls. It represents a fundamentally procedural operation, so it makes sense to implement it as basic procedures. That seems like the most straightforward way to give it a clear order and structure.

However, moving to a more declarative pattern divorces us from that goal a little bit. That is, adding extra levels of abstraction within the lighting parser could also make it more difficult to understand and maintain.

Conclusion

So, all in all, it's not really clear what benefit there would be in having a single huge render pass. It seems more sensible to instead use a number of smaller render passes, one for each phase of the pipeline.

This would probably mean a structure such as this:

  1. initial opaque stuff
    • creating the gbuffer
  2. opaque light resolve
    • perform deferred lighting
    • ScreenSpaceReflections requires some compute shader work that takes the gbuffer as input and is consumed during the lighting resolve
  3. multiple separate render passes for translucent rendering
    • in here we have things like ocean rendering, ordered transparency methods, particles, etc
    • many of these methods require extra buffers and special requirements
  4. possibly MSAA resolve, post processing steps and tonemapping could be combined into a single render pass

The configurable lighting parser structure from above might still be useful for the post processing steps at the end.

This means that the "gbuffer" will only be an "attachment" during the very first render pass. In other passes, it can be a normal image view / shader resource.
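
Putting that all together, recording the frame might end up looking roughly like this (every name in this sketch is a hypothetical placeholder):

    #include <vulkan/vulkan.h>

    // rough sketch of the structure above; every type and function name is a hypothetical placeholder
    struct FrameResources
    {
        VkRenderPassBeginInfo createGBufferPassBegin;
        VkRenderPassBeginInfo lightResolvePassBegin;
        VkRenderPassBeginInfo postProcessPassBegin;
    };

    void DrawOpaqueGeometry(VkCommandBuffer);
    void DispatchScreenSpaceReflections(VkCommandBuffer);   // compute work between the passes
    void ResolveDeferredLighting(VkCommandBuffer);
    void RunTranslucentPasses(VkCommandBuffer, FrameResources&);
    void RunPostProcessAndTonemap(VkCommandBuffer);

    void RunLightingParser(VkCommandBuffer cmd, FrameResources& res)
    {
        // 1. initial opaque pass -- the gbuffer targets are attachments only here
        vkCmdBeginRenderPass(cmd, &res.createGBufferPassBegin, VK_SUBPASS_CONTENTS_INLINE);
        DrawOpaqueGeometry(cmd);
        vkCmdEndRenderPass(cmd);

        // compute work (eg, screen space reflections) runs between the passes, reading the
        // gbuffer through a normal image view / shader resource (barriers omitted)
        DispatchScreenSpaceReflections(cmd);

        // 2. opaque light resolve
        vkCmdBeginRenderPass(cmd, &res.lightResolvePassBegin, VK_SUBPASS_CONTENTS_INLINE);
        ResolveDeferredLighting(cmd);       // samples the gbuffer as a shader resource
        vkCmdEndRenderPass(cmd);

        // 3. translucent phases -- one or more separate passes (ocean, OIT, particles, ...)
        RunTranslucentPasses(cmd, res);

        // 4. MSAA resolve, post processing and tonemapping, possibly in one combined pass
        vkCmdBeginRenderPass(cmd, &res.postProcessPassBegin, VK_SUBPASS_CONTENTS_INLINE);
        RunPostProcessAndTonemap(cmd);
        vkCmdEndRenderPass(cmd);
    }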


