I started this little rendering sandbox about a year ago, and I started this blog at the same time to share my progress and findings. One year later, I now have what looks like a basic modern rendering sandbox integrating the subjects I studied. I have experimented with a lot of concepts since my last post, so I will quickly summarize all of them and share the references I used for my implementations.
Reflections
I implemented reflections using parallax-corrected prefiltered cubemaps. With this system, you place probes and optional associated geometry volumes in the scene.
The probe is the point from which the 6 scene captures are taken, and the geometry volume is the geometric approximation of the scene used to perform the optional parallax correction (limited to an AABB in my implementation). In my implementation, each mesh in the scene is manually associated with one probe. The idea is then to prefilter the cubemap for each roughness level and store the results in successive mipmaps. For the reflections to match the analytical specular lighting model, we need to convolve the cubemap with the same shape; in my case this model is Cook-Torrance with GGX for the normal distribution. To understand this integration in the discrete domain, we can consider each cubemap texel as a light source. The brute force method would then be to iterate over all of them and compute each contribution according to the specular BRDF. To reduce the number of texels, we instead only consider the ones that have a significant impact on the result, using importance sampling. I learned about importance sampling in this GPU Gems 3 chapter.

One problem is that the Cook-Torrance lobe changes shape according to roughness and the viewing angle, which would add another dimension to our prefiltered cubemap. To visualize this shape, it is very instructive to use the Disney BRDF explorer. Fortunately, a clever approximation relying on a split sum and a 2D LUT texture exists. This whole process is explained in great detail in Real Shading in Unreal Engine 4 and Moving Frostbite to Physically Based Rendering. The biggest trade-off of this approximation is that the integration is done with the viewing angle aligned to the normal (V = N); because of this, we lose the stretching of the reflection at grazing angles. Finally, I also don't have the indirect lighting from the sky in my probe captures. To fix this, I should do the capture pass twice: a first pass to generate the irradiance map and a second with that contribution applied.
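To give an idea of what the prefiltering step looks like, here is a minimal GLSL sketch of GGX importance sampling in the spirit of the UE4 course notes. It is not my exact implementation: the sampler and function names are illustrative, and it uses the V = N simplification from the split-sum approximation.

```glsl
// Sketch of GGX importance-sampled cubemap prefiltering (V = N assumption).
// environmentMap and SAMPLE_COUNT are illustrative names.
const float PI = 3.14159265359;

float radicalInverseVdC(uint bits)
{
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return float(bits) * 2.3283064365386963e-10; // 1 / 2^32
}

vec2 hammersley2d(uint i, uint N)
{
    return vec2(float(i) / float(N), radicalInverseVdC(i));
}

vec3 importanceSampleGGX(vec2 Xi, vec3 N, float roughness)
{
    float a = roughness * roughness;
    // Map the 2D uniform sample to a half-vector around the GGX lobe.
    float phi = 2.0 * PI * Xi.x;
    float cosTheta = sqrt((1.0 - Xi.y) / (1.0 + (a * a - 1.0) * Xi.y));
    float sinTheta = sqrt(1.0 - cosTheta * cosTheta);
    vec3 H = vec3(sinTheta * cos(phi), sinTheta * sin(phi), cosTheta);

    // Build a tangent frame around N and bring H into world space.
    vec3 up = abs(N.z) < 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
    vec3 tangent = normalize(cross(up, N));
    vec3 bitangent = cross(N, tangent);
    return normalize(tangent * H.x + bitangent * H.y + N * H.z);
}

vec3 prefilterEnvMap(samplerCube environmentMap, vec3 N, float roughness)
{
    // Split-sum simplification: the view direction is assumed equal to the normal.
    vec3 V = N;
    vec3 color = vec3(0.0);
    float totalWeight = 0.0;

    const uint SAMPLE_COUNT = 1024u;
    for (uint i = 0u; i < SAMPLE_COUNT; ++i)
    {
        vec2 Xi = hammersley2d(i, SAMPLE_COUNT);      // low-discrepancy sample
        vec3 H  = importanceSampleGGX(Xi, N, roughness);
        vec3 L  = normalize(2.0 * dot(V, H) * H - V); // reflect V around H

        float NdotL = max(dot(N, L), 0.0);
        if (NdotL > 0.0)
        {
            color += texture(environmentMap, L).rgb * NdotL;
            totalWeight += NdotL;
        }
    }
    return color / max(totalWeight, 0.0001);
}
```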
I precomputed all of this using compute shaders and bound a key to force a re-bake when needed. For now I can only say that it doesn't look too bad, since I don't yet have a system to compare my results against a reference. To be honest, I'm not sure I will go that far, since my primary goal was to get a better understanding of these concepts. This is how it looks in action.
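As an aside, the AABB parallax correction mentioned above boils down to intersecting the reflection ray with the probe's proxy box and re-aiming the cubemap lookup from the capture point. A small GLSL sketch, with hypothetical uniform names:

```glsl
// Sketch of AABB parallax correction for a local cubemap probe.
// uBoxMin/uBoxMax bound the proxy geometry, uProbePos is the capture point
// (all hypothetical names, in world space).
vec3 parallaxCorrectedReflection(vec3 worldPos, vec3 reflDir,
                                 vec3 uBoxMin, vec3 uBoxMax, vec3 uProbePos)
{
    // Intersect the reflection ray with the AABB proxy (slab method).
    vec3 firstPlane  = (uBoxMax - worldPos) / reflDir;
    vec3 secondPlane = (uBoxMin - worldPos) / reflDir;
    vec3 furthest = max(firstPlane, secondPlane);
    float dist = min(min(furthest.x, furthest.y), furthest.z);

    // Recompute the lookup direction from the probe to the intersection point.
    vec3 hitPos = worldPos + reflDir * dist;
    return hitPos - uProbePos;
}
```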
Indirect lighting
I rely on two systems for the indirect lighting contribution: irradiance environment maps and single-bounce indirect illumination. For the irradiance map, I simply reuse the 4×4 mipmap of the probes used for reflections, since at this level the convolution is almost 180°. I use this for outdoor scenes. This is not exact, and I haven't precomputed a per-fragment sky visibility factor, but it is still better than no indirect lighting at all.

For indoor (and outdoor) scenes, I precompute a per-vertex indirect light contribution using something along the lines of reflective shadow maps (RSM). The idea is simple: each texel of the shadow map is considered a new light source that you re-inject for the second bounce of light. To reconstruct light from it, the RSM needs to contain the normal and the color in addition to the depth. Computing this for all texels would be too expensive, so I uniformly select a subset of texels using the Hammersley distribution. I iterate over all the VBOs in the scene and do the lighting computation per vertex in a compute shader. Again, I bound a key to trigger a re-bake. I chose this approach because it is simple, bakes really quickly, and I have the feeling that I could derive a stable real-time version from it. At the final lighting stage, both indirect light contributions are multiplied by the SSAO factor. At this point I'm pretty sure I break all energy conservation rules with these indirect lighting systems and that pixels are about to explode, but this is what I have for now.
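For illustration, the per-vertex gather could look something like the GLSL compute shader below. The buffer layout, texture names, and the fixed sample count are placeholders of mine, not my actual code; the weighting is the standard RSM virtual-point-light term.

```glsl
// Compute shader sketch: per-vertex single-bounce gather from a reflective
// shadow map (RSM). Bindings, texture names and the sample count are
// illustrative placeholders.
#version 430
layout(local_size_x = 64) in;

layout(std430, binding = 0) buffer Positions { vec4 positions[]; };
layout(std430, binding = 1) buffer Normals   { vec4 normals[];   };
layout(std430, binding = 2) buffer Indirect  { vec4 indirect[];  };

uniform sampler2D rsmWorldPos;   // world position stored per RSM texel
uniform sampler2D rsmNormal;     // normal stored in the RSM
uniform sampler2D rsmFlux;       // reflected flux (color) stored in the RSM
uniform vec2 rsmSamples[64];     // precomputed Hammersley points in [0,1]^2

void main()
{
    uint idx = gl_GlobalInvocationID.x;
    uint vertexCount = uint(positions.length());
    if (idx >= vertexCount) return;

    vec3 P = positions[idx].xyz;
    vec3 N = normalize(normals[idx].xyz);
    vec3 bounce = vec3(0.0);

    // Treat each sampled RSM texel as a small virtual point light.
    for (int i = 0; i < 64; ++i)
    {
        vec2 uv = rsmSamples[i];
        vec3 lightPos = texture(rsmWorldPos, uv).xyz;
        vec3 lightNrm = texture(rsmNormal, uv).xyz;
        vec3 flux     = texture(rsmFlux, uv).rgb;

        vec3 toVertex = P - lightPos;
        float dist2 = max(dot(toVertex, toVertex), 0.01);
        vec3 L = toVertex / sqrt(dist2);

        // RSM weighting: both cosines clamped, squared distance falloff.
        float w = max(dot(lightNrm, L), 0.0) * max(dot(N, -L), 0.0) / dist2;
        bounce += flux * w;
    }
    indirect[idx] = vec4(bounce / 64.0, 1.0);
}
```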
Volumetric light scattering
This one was in the instant gratification category: a simple system reusing shadow maps that can produce convincing effects. Lighting participating media in the air is a little different from lighting an opaque surface. You have a lighting model that determines how much of the light is scattered toward the camera, depending on the light direction and a forward scattering factor. The model I used is Mie scattering approximated with the Henyey-Greenstein phase function. Since we are dealing with a non-opaque medium, we need to accumulate the contribution of every point between the shaded fragment and the camera. To do so, we ray march from the fragment position to the camera position and accumulate the result. At each step, we look into the shadow map to see if the light is blocked. The number of steps determines the fidelity of the effect: about 100 steps are needed to obtain good quality. This can be optimized by taking random steps (about 16) followed by a blur, and by performing the ray marching at half resolution followed by a bilateral upsampling. I gathered information on this subject in GPU Pro 5 – Volumetric Light Effects in Killzone and this excellent blog post.
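The core of the ray march is short enough to sketch. The GLSL below is a simplified version under my own naming assumptions (shadow map uniform, light uniforms, fixed step count); a real implementation would jitter the start position and blur the result as mentioned above.

```glsl
// Sketch of ray-marched volumetric scattering with a Henyey-Greenstein phase
// function. Uniform names and the step count are illustrative.
float henyeyGreenstein(float cosTheta, float g)
{
    float g2 = g * g;
    return (1.0 - g2) / (4.0 * 3.14159265 * pow(1.0 + g2 - 2.0 * g * cosTheta, 1.5));
}

vec3 rayMarchScattering(vec3 fragPos, vec3 camPos, vec3 lightDir,
                        vec3 lightColor, float g, sampler2DShadow shadowMap,
                        mat4 lightViewProj)
{
    const int NUM_STEPS = 16;            // low step count, relies on a later blur
    vec3 ray  = camPos - fragPos;
    vec3 step = ray / float(NUM_STEPS);
    vec3 pos  = fragPos;
    vec3 accum = vec3(0.0);

    for (int i = 0; i < NUM_STEPS; ++i)
    {
        // Is this sample lit? Project into the shadow map and compare depth.
        vec4 lightSpace = lightViewProj * vec4(pos, 1.0);
        vec3 proj = lightSpace.xyz / lightSpace.w * 0.5 + 0.5;
        float visibility = texture(shadowMap, proj);

        // Amount of light scattered toward the camera at this sample.
        float cosTheta = dot(normalize(ray), -lightDir);
        accum += visibility * henyeyGreenstein(cosTheta, g) * lightColor;

        pos += step;
    }
    return accum / float(NUM_STEPS);
}
```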
Even a less contrasted outdoor light scattering, with a little tint on the sun color, brings a nice contribution.
I’m now thinking about releasing the source of this small sandbox; I guess it could be of some value to someone, and it would open the door to comments and improvements. But first I will need to find time to prepare the source to make it as clear as possible, so that should be the topic of the next post.
This is supercool, would love to see your source. Also, for your final shading is it just a simple sum like (reflectance + indirect + direct) or is it something more complicated?
Sorry, I missed that totally. Yes this is a simple sum. I will DM you for sources.
I’m very interested in your indirect lighting approach – did you ever release the sandbox code for review? Cheers, and thanks for the interesting writings.
Thanks! I have not released the sources, I don’t have much time right now to do this properly. But I want to share it, I will DM you for access.
Awesome writeup! Even better with all the references and links. Thanks for spending the time on this. I am also working on a PBR test and I’d love to see how you are solving certain things. Especially interested in your irradiance generation. Would be lovely if you’d also shoot me an invite.
Thanks! I sent you an invite.
I find myself confused about where in the pipeline the environment map filtering happens. Great article, would love to see your code!
Please gitHub this! The screenshots look absolutely gorgeous and I would like to implement some of the eye-candy features myself using Ogl as well 🙂
This would literally be the only source for such visuals for Ogl on the entire internet, so I’m sure most people writing renderers would greatly appreciate it!
Cool stuff indeed! Sources? =D I’d like to take a look at the stuff you’ve coded.
The volumetric light scattering effect is nice and cool. I’m now learning how to implement it, but the examples I found are either DirectX versions or old screen-space post-process effects. Would appreciate it if you could share your source.