I implemented reflections using parallax-corrected prefiltered cubemaps. With this system, you place probes and optional associated geometry volumes in the scene.

The probe is the point from which the 6 scene captures are taken, and the geometry volume is the geometric approximation of the scene used to perform the optional parallax correction (limited to AABBs in my implementation). In my implementation, each mesh in the scene is manually associated with one probe. The idea is then to prefilter the cubemap for each level of roughness and store the results in successive mipmaps.

For the reflections to match the analytical specular lighting model, we need to convolve the cubemap with the same shape. In my case this model is Cook-Torrance with GGX for the normal distribution. To understand this integration in the discrete domain, we can consider each cubemap texel as a light source. The brute force method would be to iterate over all of them and compute each contribution according to the specular BRDF. But to reduce the number of texels, we only consider the ones that have a significant impact on the result, using importance sampling. I learned about importance sampling in this GPU Gems 3 chapter.

One problem is that the Cook-Torrance shape changes with roughness and viewing angle, which would add another dimension to our prefiltered cubemap. To visualize this shape, the Disney BRDF Explorer is very instructive. Fortunately, a clever approximation relying on a split sum and a 2D LUT texture exists. This whole process is explained in great detail in *Real Shading in Unreal Engine 4* and *Moving Frostbite to Physically Based Rendering*. The biggest trade-off of this approximation is that the integration is done with the viewing angle aligned to the normal (V = N); because of this, we lose the stretching of the reflection at grazing angles.

Finally, I also don't have the indirect lighting from the sky in my probe captures. To fix this, I should do the capture pass twice: a first pass to generate the irradiance map and a second with that contribution applied.

So I precomputed all of this using compute shaders and bound a key to force a re-bake when needed. For now I can only say that it doesn't look too bad, since I don't yet have a system to compare my results against a reference. To be honest, I'm not sure I will go that far, since my primary goal was to get a better understanding of these concepts. This is how it looks in action.

I rely on 2 systems for the indirect lighting contribution: irradiance environment maps and single-bounce indirect illumination. For the irradiance map, I simply reuse the 4×4 mipmap of the probes used for reflection, since at this level the convolution covers almost 180°. I use this for outdoor scenes. This is not exact, and I haven't precomputed a per-fragment sky visibility factor, but it is still better than no indirect lighting.

For indoor (and outdoor) scenes, I precompute a per-vertex indirect light contribution using something along the lines of reflective shadow maps (RSM). The idea is simple: each texel of the shadow map is considered as a new light source that you re-inject for the second bounce of light. To reconstruct light from it, the RSM needs to contain the normal and the color in addition to the depth. Computing this for all texels would be too expensive, so I uniformly select a subset of texels using the Hammersley distribution. I iterate over all the VBOs in the scene and do the lighting computation per vertex in a compute shader. Again, I bound a key to trigger a re-bake. I selected this approach because it is simple, bakes really quickly, and I have the feeling that I could derive a stable realtime version from it.

At the final lighting stage, both indirect light contributions are multiplied by the SSAO factor. At this point I'm pretty sure I break all energy conservation rules with these indirect lighting systems and that pixels are about to explode, but this is what I have for now.
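The texel-selection step can be sketched with the usual bit-reversal construction of the Hammersley point set; the i-th of n points is (i/n, radical_inverse(i)). This is a generic C version of that construction, not my engine code.

```c
#include <stdint.h>

/* Van der Corput radical inverse in base 2, computed by reversing the
 * 32 bits of the index and scaling by 1/2^32. */
double radical_inverse_vdc(uint32_t bits)
{
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return (double)bits * 2.3283064365386963e-10; /* 1 / 2^32 */
}

/* i-th Hammersley point out of n; scale by the RSM resolution to pick
 * a well-distributed subset of shadow map texels. */
void hammersley(uint32_t i, uint32_t n, double *u, double *v)
{
    *u = (double)i / (double)n;
    *v = radical_inverse_vdc(i);
}
```

The appeal over random selection is that the subset stays evenly spread over the shadow map for any sample count.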

This one was in the instant gratification category: a simple system reusing shadow maps that can produce convincing effects. Lighting participating media in the air is a little different from lighting an opaque surface. You have a lighting model that determines how much of the light is scattered toward the camera, depending on the light direction and a forward scattering factor. The model I used is Mie scattering approximated with Henyey-Greenstein. Since we are dealing with a non-opaque medium, we need to accumulate the result of each sample between the shaded fragment and the camera. To do so, we ray march from the fragment position to the camera position and accumulate the result. At each step we look into the shadow maps to see if the light is blocked. The number of steps determines the fidelity of the effect: about 100 steps are needed to obtain good quality. This can be optimized by taking random steps (about 16) followed by a blur, and by performing the ray marching at half resolution followed by a bilateral upsampling. I gathered information on this subject in GPU Pro 5 – *Volumetric Light Effects in Killzone* – and this excellent blog post.

Even a less contrasted outdoor light scattering, with a little tint on the sun color, brings a nice contribution.

I’m now thinking about releasing the source of this small sandbox; I guess it could be of some value to someone, and it would open the door to comments and improvements. But first I need to find the time to prepare the source to make it as clear as possible, so that should be the topic of the next post.

Some takeaways from this experiment:

- The application of the Fresnel term and the proper energy conservation really have an impressive visual impact.
- Even if the instruction count is higher than with the good old Blinn-Phong, the performance hit is not that high on modern GPUs.
- Authoring and calibrating textures is hard, I ended up using DDO to have a good reference.
- You really need some environment reflection (which I don’t have for now) for the metals to look good since they don’t have a diffuse component.
- Not really specific to the BRDF model, but the high contrast specular highlights don’t come out too well on the Rift DK1, revealing the pixel grid even more. I cannot wait to see how it will perform on the DK2.
- Goodbye Blinn-Phong.

The next step will be to add some IBL to handle indirect lighting and radiance properly. Those more realistic materials mixed with the lack of irradiance really give the impression that we are on the moon.

**References:**

- D3DBook: (Lighting) Cook-Torrance
- Optimizing GGX Shaders with dot(L,H)

Variance shadow maps offer a way to soften shadow edges by allowing the use of standard filtering methods, such as hardware linear interpolation and Gaussian blur, directly on shadow maps. The results are convincing, and many options are available to tune the quality/performance ratio.

The variance shadow map technique takes a statistical approach to shadow filtering. The problem is formulated this way: *for a given point at a fixed depth t, what is the percentage of points in a filtered shadow map that have a depth greater than or equal to t?* The answer can be found with the Chebyshev inequality:

P(x ≥ t) ≤ σ² / (σ² + (t − µ)²)

where P(x ≥ t) is the percentage of points that will fail the depth test, x is the depth, t is the fixed depth we are comparing to, σ² is the variance (the standard deviation squared), and µ is the mean.

The first important observation is that we are considering our shadow map to be filtered, so after the filtering the depth value becomes the *mean*. Now we need to find how to retrieve the variance from a filtered shadow map. For this we need to remember that the variance can also be expressed as the *mean of squares* minus the *squared mean*:

σ² = E[x²] − µ²
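A tiny numeric sketch of why this works: averaging the two stored channels (depth and depth squared), the way hardware filtering would, yields the mean and the mean of squares, and the variance falls out of the identity above. Names and numbers are purely illustrative.

```c
#include <math.h>

/* Box-filter n depth samples the way texture filtering would average the
 * two stored channels (depth, depth^2), then recover the variance as
 * E[x^2] - mu^2. */
void filtered_moments(const double *depths, int n,
                      double *mean, double *variance)
{
    double m1 = 0.0, m2 = 0.0;
    for (int i = 0; i < n; ++i) {
        m1 += depths[i];             /* first channel: depth    */
        m2 += depths[i] * depths[i]; /* second channel: depth^2 */
    }
    m1 /= n;
    m2 /= n;
    *mean = m1;
    *variance = m2 - m1 * m1;
}
```

For two texels at depths 0.2 and 0.4, the filtered channels give µ = 0.3 and E[x²] = 0.1, so σ² = 0.1 − 0.09 = 0.01, exactly the variance of the two samples.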

Knowing this, if we store the *depth squared* into our shadow map, this value becomes the *mean of squares* after filtering. Now that we know how to get all the terms needed for this inequality, let’s see what the implementation looks like.

For the implementation I will assume that you already have basic shadow maps working. If you don’t, this tutorial is a very good starting point. So I will only highlight the implementation changes specific to VSM.

The first thing we need to change is the type of attachment for our shadow FBO. Since our depth pass now needs to record both the depth and the squared depth, we attach a texture of type GL_RGB16F_ARB as the color attachment. The depth attachment itself is no longer a texture but a renderbuffer.

glGenFramebuffers(1, &frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);

glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F_ARB, 1024, 1024, 0, GL_RGB, GL_FLOAT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureID, 0);

glGenRenderbuffers(1, &depthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, 1024, 1024);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBuffer);

Don’t forget to disable the compare function from your shadow map texture parameters and to switch your sampler from *sampler2DShadow* to *sampler2D*:

//glTexParameteri(mGLTexturetype, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
//glTexParameteri(mGLTexturetype, GL_TEXTURE_COMPARE_FUNC, GL_LESS);

The shader for your depth pass will look like this:

vec4 recordDepth()
{
    float depth = gl_FragCoord.z;
    float moment2 = depth * depth;
    return vec4(depth, moment2, 0.0, 0.0);
}

Then you compute the percentage of shadow using the Chebyshev inequality this way:

vec4 shadowCoordNDC = vShadowCoord / vShadowCoord.w;
vec2 moments = texture2D(ShadowMap, shadowCoordNDC.xy).rg;
if (shadowCoordNDC.z <= moments.x)
    return 1.0;
float variance = moments.y - (moments.x * moments.x);
variance = max(variance, 0.0000005);
float d = shadowCoordNDC.z - moments.x;
float shadowPCT = variance / (variance + d * d);

Note that since we are not using the GLSL *textureProj* function, we need to apply the division by *vShadowCoord.w* ourselves to transform to normalized device coordinates.

**Light bleeding**

Variance shadow maps suffer from light bleeding when multiple shadows overlap each other. This can be corrected by applying a threshold to the returned *shadowPCT* value. In GLSL this can be done with the *smoothstep* function. To do so, you can modify your shadow shader this way:

//float p_max = variance / (variance + d*d);
float p_max = smoothstep(0.20, 1.0, variance / (variance + d*d));

For this particular scene, a minimum value of 0.20 was just enough to get rid of the light bleeding. We can also observe that applying a threshold to the shadowPCT has the unfortunate side effect of hardening the transition from shadow to light, creating less soft shadows, so you will need to experiment with your scene to find the right minimum value. Finally, even if it’s not directly related to VSM, note that the Gaussian blur we use creates uniformly soft edges that don’t take into account the distance between the occluder and the receiver.
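A common variant of this clamp is a linear remap of *p_max* instead of smoothstep: everything below the threshold is treated as fully shadowed and the remainder is rescaled to [0, 1]. Here is a C sketch; the function names are my own, not from the VSM paper.

```c
/* Clamped linear interpolation: where v falls between lo and hi, as a
 * value in [0, 1]. */
double linstep(double lo, double hi, double v)
{
    double t = (v - lo) / (hi - lo);
    return t < 0.0 ? 0.0 : (t > 1.0 ? 1.0 : t);
}

/* Light-bleed reduction: cut off p_max below `amount` and rescale the
 * rest, trading some penumbra softness for fewer bleeding artifacts. */
double reduce_light_bleed(double p_max, double amount)
{
    return linstep(amount, 1.0, p_max);
}
```

Compared to smoothstep, the linear remap keeps the falloff shape of the Chebyshev bound above the threshold instead of adding extra curvature, so the penumbra hardens a little less for the same cutoff.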

**References:**

- Variance Shadow Maps (vsm_paper.pdf), www.punkuser.net
