Lighting features exploration

I started this little rendering sandbox about a year ago, and I started this blog at the same time to share my progress and findings. One year later, I now have what looks like a basic modern rendering sandbox integrating the subjects I have studied. I have experimented with a lot of concepts since my last post; I will quickly summarize all of them and share the references I used for my implementations.

Reflections
Reflections
I implemented reflections using parallax-corrected prefiltered cubemaps. With this system, you place probes, and optionally their associated geometry volumes, in the scene.

Probes and associated geometry volumes.

The probe is the point from which the 6 scene captures are taken, and the geometry volume is the geometric approximation of the scene used to perform the optional parallax correction (limited to AABBs in my implementation). Each mesh in the scene is manually associated with one probe. The idea is then to prefilter the cubemap for each level of roughness and store the results in successive mipmaps.

For the reflections to match the analytical specular lighting model, we need to convolve the cubemap with the same shape; in my case this model is Cook-Torrance with GGX for the normal distribution. To understand this integration in the discrete domain, we can consider each cubemap texel as a light source. The brute-force method would be to iterate over all of them and compute each contribution according to the specular BRDF, but to reduce the number of texels we only consider the ones that have a significant impact on the result, using importance sampling. I learned about importance sampling in this GPU Gems 3 chapter. One remaining problem is that the Cook-Torrance shape changes with roughness and the viewing angle, which would add another dimension to our prefiltered cubemap. To visualize this shape, the Disney BRDF Explorer is very instructive. Fortunately, a clever approximation relying on a split sum and a 2D LUT texture exists; this whole process is explained in great detail in Real Shading in Unreal Engine 4 and Moving Frostbite to Physically Based Rendering. The biggest trade-off of this approximation is that the integration is done with the viewing angle aligned to the normal (V = N); because of this, we lose the stretching of the reflection at grazing angles.

Finally, I also don't have the indirect lighting from the sky in my probe captures. To fix this, I should do the capture pass twice: a first pass to generate the irradiance map, and a second with that contribution applied.
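To make the parallax correction concrete, here is a minimal GLSL sketch of the corrected cubemap lookup. The uniform names (uBoxMin, uBoxMax, uProbePos) are made up for illustration, not my exact code:

uniform vec3 uBoxMin;    // geometry volume AABB, world space
uniform vec3 uBoxMax;
uniform vec3 uProbePos;  // position the probe was captured from

// Intersect the reflection ray with the AABB, then rebuild the lookup
// direction from the probe position instead of the shaded point.
vec3 parallaxCorrect(vec3 worldPos, vec3 R)
{
    vec3 invR  = 1.0 / R;
    vec3 t0    = (uBoxMax - worldPos) * invR;
    vec3 t1    = (uBoxMin - worldPos) * invR;
    vec3 tFar  = max(t0, t1);
    float t    = min(min(tFar.x, tFar.y), tFar.z);
    vec3 hit   = worldPos + R * t;
    return hit - uProbePos;
}

And here is a sketch of the importance-sampled prefiltering, in the spirit of the Real Shading in Unreal Engine 4 course notes; uEnvMap and NUM_SAMPLES are again assumptions:

const float PI = 3.14159265;
const uint NUM_SAMPLES = 1024u;
uniform samplerCube uEnvMap;  // the raw scene capture

// Van der Corput radical inverse, used to build the Hammersley sequence.
float radicalInverse_VdC(uint bits)
{
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return float(bits) * 2.3283064365386963e-10;
}

vec2 hammersley(uint i, uint n)
{
    return vec2(float(i) / float(n), radicalInverse_VdC(i));
}

// Map a uniform 2D sample to a half-vector following the GGX distribution.
vec3 importanceSampleGGX(vec2 Xi, float roughness, vec3 N)
{
    float a = roughness * roughness;
    float phi = 2.0 * PI * Xi.x;
    float cosTheta = sqrt((1.0 - Xi.y) / (1.0 + (a * a - 1.0) * Xi.y));
    float sinTheta = sqrt(1.0 - cosTheta * cosTheta);
    vec3 H = vec3(sinTheta * cos(phi), sinTheta * sin(phi), cosTheta);
    // Build an orthonormal basis around N and move H into world space.
    vec3 up = abs(N.z) < 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
    vec3 tangentX = normalize(cross(up, N));
    vec3 tangentY = cross(N, tangentX);
    return tangentX * H.x + tangentY * H.y + N * H.z;
}

vec3 prefilterEnvMap(float roughness, vec3 R)
{
    // Split-sum approximation: V = N = R, which is what loses the
    // stretched reflections at grazing angles mentioned above.
    vec3 N = R;
    vec3 V = R;
    vec3 color = vec3(0.0);
    float totalWeight = 0.0;
    for (uint i = 0u; i < NUM_SAMPLES; ++i)
    {
        vec3 H = importanceSampleGGX(hammersley(i, NUM_SAMPLES), roughness, N);
        vec3 L = 2.0 * dot(V, H) * H - V;
        float NdotL = max(dot(N, L), 0.0);
        if (NdotL > 0.0)
        {
            color += texture(uEnvMap, L).rgb * NdotL;
            totalWeight += NdotL;
        }
    }
    return color / max(totalWeight, 0.001);
}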

Filtered cubemap with increasing roughness stored in mipmaps and the 2D LUT texture.


Result on spheres of increasing roughness.

I precomputed all of this using compute shaders and bound a key to force a re-bake when needed. For now, I can only say that it doesn't look too bad, since I don't yet have a system to compare my results against a reference. To be honest, I'm not sure I will go that far, since my primary goal was to get a better understanding of these concepts. This is how it looks in action.
Indirect lighting 
I rely on two systems for the indirect lighting contribution: irradiance environment maps and single-bounce indirect illumination.

For the irradiance maps, I simply reuse the 4×4 mip level of the probes used for reflections, since at this level the convolution is almost 180°. I use this for outdoor scenes. This is not exact, and I haven't precomputed a per-fragment sky visibility factor, but it is still better than no indirect lighting at all.

For indoor (and outdoor) scenes, I pre-compute a per-vertex indirect light contribution using something along the lines of reflective shadow maps (RSM). The idea is simple: each texel of the shadow map is considered as a new light source that you re-inject for the second bounce of light. To reconstruct light from it, the RSM needs to contain the normal and the color in addition to the depth. Computing this for all texels would be too expensive, so I uniformly select a subset of texels using the Hammersley distribution. I iterate over all the VBOs in the scene and do the lighting computation per vertex in a compute shader (see the sketch below). Again, I bound a key to trigger a re-bake. I selected this approach because it is simple, bakes really quickly, and I have the feeling that I could derive a stable realtime version from it.

At the final lighting stage, both indirect light contributions are multiplied by the SSAO factor. At this point I'm pretty sure I break all energy conservation rules with these indirect lighting systems and that pixels are about to explode, but this is what I have for now.
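Here is a minimal compute-shader sketch of that per-vertex gather. The buffer layout, the uRSM* samplers and NUM_VPL are illustrative assumptions, not my exact code:

#version 430
layout(local_size_x = 64) in;

// Hypothetical layout: one invocation per vertex.
layout(std430, binding = 0) buffer Positions { vec4 positions[]; };
layout(std430, binding = 1) buffer Normals   { vec4 normals[]; };
layout(std430, binding = 2) buffer Indirect  { vec4 indirect[]; };

uniform sampler2D uRSMDepth;   // light-space depth
uniform sampler2D uRSMNormal;  // light-space normal
uniform sampler2D uRSMFlux;    // reflected color / flux
uniform mat4 uLightViewProjInv;
const uint NUM_VPL = 128u;

// Van der Corput radical inverse, used to build the Hammersley sequence.
float radicalInverse_VdC(uint bits)
{
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return float(bits) * 2.3283064365386963e-10;
}

// Hammersley point set: a deterministic, uniform subset of RSM texels.
vec2 hammersley(uint i, uint n)
{
    return vec2(float(i) / float(n), radicalInverse_VdC(i));
}

// Unproject an RSM texel back to world space.
vec3 rsmWorldPos(vec2 uv)
{
    float depth = texture(uRSMDepth, uv).r;
    vec4 p = uLightViewProjInv * vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    return p.xyz / p.w;
}

void main()
{
    uint v = gl_GlobalInvocationID.x;
    if (v >= uint(positions.length()))
        return;
    vec3 P = positions[v].xyz;
    vec3 N = normalize(normals[v].xyz);
    vec3 bounce = vec3(0.0);
    for (uint i = 0u; i < NUM_VPL; ++i)
    {
        vec2 uv = hammersley(i, NUM_VPL);
        vec3 vplPos  = rsmWorldPos(uv);
        vec3 vplNrm  = normalize(texture(uRSMNormal, uv).xyz);
        vec3 vplFlux = texture(uRSMFlux, uv).rgb;
        vec3 d = vplPos - P;
        float dist2 = max(dot(d, d), 0.01);
        vec3 L = d * inversesqrt(dist2);
        // Each selected RSM texel acts as a point light (the second bounce).
        bounce += vplFlux * max(dot(N, L), 0.0) * max(dot(vplNrm, -L), 0.0) / dist2;
    }
    indirect[v] = vec4(bounce / float(NUM_VPL), 1.0);
}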

Indoor scene without (left) and with (right) per-vertex radiosity pre-computed using RSM.

Volumetric light scattering
This one was in the instant-gratification category: a simple system, reusing the shadow maps, that can produce convincing effects. Lighting a participating medium in the air is a little different from lighting an opaque surface. You have a lighting model that determines how much of the light is scattered toward the camera, depending on the light direction and a forward scattering factor. The model I used is Mie scattering approximated with Henyey-Greenstein. Since we are dealing with a non-opaque medium, we need to accumulate the contribution of every point between the shaded fragment and the camera. To do so, we ray march from the fragment position to the camera position and accumulate the result, looking into the shadow maps at each step to see if the light is blocked. The number of steps determines the fidelity of the effect; about 100 steps are needed to obtain good quality. This can be optimized by taking fewer, randomly jittered steps (about 16) followed by a blur, and by performing the ray march at half resolution followed by a bilateral upsampling. I gathered information on this subject in GPU Pro 5 (Volumetric Light Effects in Killzone: Shadow Fall) and this excellent blog post.
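A minimal sketch of this ray march; the uniform names (uShadowMap, uLightViewProj, uG) and the basic depth-compare shadow lookup are assumptions of this sketch:

uniform sampler2D uShadowMap;  // standard depth shadow map
uniform mat4 uLightViewProj;   // world space -> light clip space
uniform float uG;              // forward scattering factor, e.g. 0.7

const float PI = 3.14159265;

// Henyey-Greenstein phase function, approximating Mie scattering.
float phaseHG(float cosTheta, float g)
{
    float denom = 1.0 + g * g - 2.0 * g * cosTheta;
    return (1.0 - g * g) / (4.0 * PI * pow(denom, 1.5));
}

// March from the shaded fragment toward the camera and accumulate the
// in-scattered light at every sample the shadow map sees as lit.
// lightDir is the direction the light travels into the scene.
vec3 volumetricScattering(vec3 fragPos, vec3 camPos, vec3 lightDir, vec3 lightColor)
{
    const int NUM_STEPS = 100;  // ~100 steps for good quality
    vec3 stepVec = (camPos - fragPos) / float(NUM_STEPS);
    vec3 pos = fragPos;
    vec3 accum = vec3(0.0);
    for (int i = 0; i < NUM_STEPS; ++i)
    {
        vec4 ls = uLightViewProj * vec4(pos, 1.0);
        vec3 coord = ls.xyz / ls.w * 0.5 + 0.5;
        if (texture(uShadowMap, coord.xy).r > coord.z)  // sample is lit
        {
            // Forward scattering peaks when we look straight into the light.
            float cosTheta = dot(lightDir, normalize(camPos - pos));
            accum += phaseHG(cosTheta, uG) * lightColor;
        }
        pos += stepVec;
    }
    return accum / float(NUM_STEPS);
}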
 
Even a less contrasted outdoor light scattering, with a little tint on the sun color, brings a nice contribution.

Outdoor scene without (left) and with (right) volumetric light scattering.

I'm now thinking about releasing the source of this small sandbox; I guess it could be of some value to someone, and it would open the door to comments and improvements. But first I need to find the time to prepare the source and make it as clear as possible, so that should be the topic of the next post.

Playing with BRDF

Theory is one thing, but there is nothing like practice and execution. It was time for me to spend some time experimenting with more physically based BRDF models to get a better feel for how they work. Just for fun I started with Cook-Torrance using the Beckmann distribution but, like everybody else, I ended up using the GGX distribution for the microfacet slope distribution (D).
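For reference, here is a minimal sketch of a Cook-Torrance specular term with GGX. The Schlick Fresnel and Schlick-Smith geometry terms, with the roughness remapping from the UE4 course notes, are one common combination, not necessarily the exact one I used:

const float PI = 3.14159265;

// GGX / Trowbridge-Reitz normal distribution term (D).
float D_GGX(float NdotH, float roughness)
{
    float a  = roughness * roughness;
    float a2 = a * a;
    float d  = NdotH * NdotH * (a2 - 1.0) + 1.0;
    return a2 / (PI * d * d);
}

// Schlick-Smith geometry term (G), with the k remapping from the UE4 notes.
float G_SchlickSmith(float NdotV, float NdotL, float roughness)
{
    float k  = (roughness + 1.0) * (roughness + 1.0) / 8.0;
    float gv = NdotV / (NdotV * (1.0 - k) + k);
    float gl = NdotL / (NdotL * (1.0 - k) + k);
    return gv * gl;
}

// Schlick approximation of the Fresnel term (F).
vec3 F_Schlick(float VdotH, vec3 F0)
{
    return F0 + (1.0 - F0) * pow(1.0 - VdotH, 5.0);
}

vec3 cookTorranceGGX(vec3 N, vec3 V, vec3 L, vec3 F0, float roughness)
{
    vec3 H = normalize(V + L);
    float NdotL = max(dot(N, L), 0.0);
    float NdotV = max(dot(N, V), 0.001);
    float NdotH = max(dot(N, H), 0.0);
    float VdotH = max(dot(V, H), 0.0);
    float D = D_GGX(NdotH, roughness);
    float G = G_SchlickSmith(NdotV, NdotL, roughness);
    vec3  F = F_Schlick(VdotH, F0);
    // Cook-Torrance: D * G * F / (4 * NdotV * NdotL), times NdotL.
    return D * G * F / (4.0 * NdotV * NdotL + 0.001) * NdotL;
}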

Some takeaways from this experiment:

  • The application of the Fresnel term and proper energy conservation really have an impressive visual impact.
  • Even if the instruction count is higher than with the good old Blinn-Phong, the performance hit is not that high on modern GPUs.
  • Authoring and calibrating textures is hard; I ended up using DDO to have a good reference.
  • You really need some environment reflections (which I don't have for now) for metals to look good, since they don't have a diffuse component.
  • Not really specific to the BRDF model, but the high-contrast specular highlights don't come out too well on the Rift DK1, revealing the pixel grid even more. Can't wait to see how it performs on the DK2.
  • Goodbye Blinn-Phong.

GGX plastic material.

The next step will be to add some IBL to handle indirect lighting and radiance properly. These more realistic materials, combined with the lack of irradiance, really give the impression that we are on the moon.

References:

D3DBook: (Lighting) Cook-Torrance

Cook-Torrance BRDF (lecture on YouTube)

Optimizing GGX Shaders with dot(L,H)

Game environments – Part A: rendering Remember Me

Everything is Shiny

Variance shadow maps

Variance shadow maps offer a way to soften shadow edges by allowing standard filtering methods, such as hardware linear interpolation and Gaussian blur, to be applied directly to shadow maps. The results are convincing, and many options are available to tune the quality/performance ratio.

Concept

The variance shadow map technique takes a statistical approach to shadow filtering. The problem is formulated this way: for a given point at depth t, what percentage of the depths stored in a filtered shadow map region are greater than or equal to t? The answer can be bounded with the one-tailed Chebyshev inequality:

P(x ≥ t) ≤ σ² / (σ² + (t - µ)²),  for t > µ

P(x ≥ t) is the percentage of points that will fail the depth test.
x is a depth stored in the shadow map.
t is the fixed depth we are comparing to.
σ² is the variance (the standard deviation squared).
µ is the mean.

The first important observation is that we consider our shadow map to be filtered, so after filtering, the stored depth value becomes the mean µ. Now we need to find how to retrieve the variance from a filtered shadow map. For this, remember that the variance can also be expressed as the mean of squares minus the squared mean:

σ² = E[x²] - µ²

Knowing this, if we store the squared depth in our shadow map, this value becomes the mean of squares after filtering. Now that we know how to obtain all the terms of the inequality, let's see what the implementation looks like.

Basic implementation (OpenGL)

For the implementation, I will assume that you already have basic shadow maps working. If you don't, this tutorial is a very good starting point. I will only highlight the implementation changes specific to VSM.

The first thing we need to change is the type of attachment of our shadow FBO. Since our depth pass now needs to record both the depth and the squared depth, we attach a texture of type GL_RGB16F_ARB as the color attachment. The depth attachment itself is no longer a texture but a renderbuffer.

// Color attachment storing the two moments (depth and depth squared).
glGenFramebuffers(1, &frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F_ARB, 1024, 1024, 0, GL_RGB, GL_FLOAT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureID, 0);
// The depth attachment is only needed for depth testing during the pass.
glGenRenderbuffers(1, &depthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, 1024, 1024);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBuffer);

Don't forget to disable the compare function in your shadow map texture parameters and to switch your sampler from sampler2DShadow to sampler2D:

//glTexParameteri(mGLTexturetype, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
//glTexParameteri(mGLTexturetype, GL_TEXTURE_COMPARE_FUNC, GL_LESS);

The shader for your depth pass will look like this:

vec4 recordDepth()
{
    // Store the depth and its square: after filtering, these become
    // the mean and the mean of squares.
    float depth = gl_FragCoord.z;
    float moment2 = depth * depth;
    return vec4(depth, moment2, 0.0, 0.0);
}

Then you compute the percentage of shadow using the Chebyshev inequality this way:

vec4 shadowCoordNDC = vShadowCoord / vShadowCoord.w;
vec2 moments = texture2D(ShadowMap, shadowCoordNDC.xy).rg;

// Fully lit when the fragment is closer than the mean occluder depth.
if (shadowCoordNDC.z <= moments.x)
    return 1.0;

float variance = moments.y - (moments.x * moments.x);
variance = max(variance, 0.0000005); // floor value against precision artifacts
float d = shadowCoordNDC.z - moments.x;
float shadowPCT = variance / (variance + d*d);

Note that since we are not using the GLSL textureProj function, we need to apply the division by vShadowCoord.w ourselves to transform to normalized device coordinates.

Results and options

(All results are rendered with a 1024×1024 shadow map)

No filtering

VSM no filtering.

This is what we get if we don't perform any filtering on our shadow map. Not very interesting: we see heavily aliased shadow edges and some weird artifacts (near the base of the cube).

Linear interpolation

VSM linear interpolation

Slightly better, but still not what we expect. After all, variance shadow maps allow us to use any filtering method, so we should try something more serious.

Linear interpolation and 5×5 Gaussian blur

VSM linear interpolation and gaussian blur

Way better: the 5×5 Gaussian blur produces soft edges. But we still have this artifact at the base of the cube.

Linear interpolation and 5×5 Gaussian blur, floor value


We can soften this depth precision artifact by raising the floor value of the variance in the shader (variance = max(variance, 0.0005)).

Linear interpolation and 5×5 Gaussian blur, 32bit texture


By changing the shadow texture internal format to GL_RGB32F_ARB, we can bring the floor value back down and get rid of the artifact.

Far away, no correction

At some angles, the farther shadows show aliasing artifacts.

Far away, anisotropic filtering

Using anisotropic filtering (x16) improves the farther shadows a little bit.

You could also try turning on mipmap generation.

Limitations and improvements


VSM light-bleeding.

Variance shadow maps suffer from light bleeding when multiple shadows start to overlap each other. This can be corrected by applying a threshold to the returned shadowPCT value; in GLSL this can be done with the smoothstep function. To do so, you can modify your shadow shader this way:

//float shadowPCT = variance / (variance + d*d);
float shadowPCT = smoothstep(0.20, 1.0, variance / (variance + d*d));

VSM light-bleeding correction.

For this particular scene, a minimum value of 0.20 was just enough to get rid of the light bleeding. We can also observe that applying a threshold to shadowPCT has the unfortunate effect of hardening the transition from shadow to light, producing less soft shadows, so you will need to experiment with your scene to find the right minimum value. Finally, even if it's not directly related to VSM, note that the Gaussian blur we use creates uniformly soft edges that don't take the distance between the occluder and the receiver into account.
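As a side note, the GPU Gems 3 chapter listed in the references reduces light bleeding with a linear step instead of smoothstep; a minimal sketch:

// Cut off the tail of the upper bound below the threshold, then rescale
// the remainder over [0, 1] (the GPU Gems 3 light bleeding reduction).
float linstep(float lo, float hi, float v)
{
    return clamp((v - lo) / (hi - lo), 0.0, 1.0);
}

float shadowPCT = linstep(0.20, 1.0, variance / (variance + d*d));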

References

Variance Shadow Maps paper (www.punkuser.net/vsm/vsm_paper.pdf)

GPU Gems 3, Chapter 8: Summed-Area Variance Shadow Maps

fabiensanglard.net: shadowmappingVSM tutorial

OpenGL Development Cookbook