Probability Theory for Physically Based Rendering Part 2

This is article 2 in a series of 2. Article 1 can be found here.

Integrals in Physically Based Rendering

To calculate light transport in a virtual scene we use Kajiya’s rendering equation:

\tag{1}L(s\leftarrow x)=G(s\leftrightarrow x)\left [L_e\left (s\leftarrow x \right )+\int_{\Omega}f_r\left (s\leftarrow x\leftarrow y\right )L\left (x\leftarrow y\right )dy \right ]

This is the three point formulation of the rendering equation. It describes that light arriving at point s from point x is the light emitted by x towards s, plus the light reflected by x towards s.

The amount of energy that is transported from point x towards s depends on several factors:

  • geometry term G(s\leftrightarrow x): if s and x are mutually visible this is \frac{1}{\|s-x\|^2}, otherwise it is 0;
  • emittance L_e(s\leftarrow x): this is greater than 0 only if x is on a light emitting surface;
  • incoming radiance L(x\leftarrow y): the light that x receives from a third point y, which it may reflect towards s;
  • reflectance f_r(s\leftarrow x\leftarrow y): this is the bidirectional reflectance distribution function (BRDF), which defines the relation between incoming and outgoing energy for a pair of directions.

The rendering equation contains an integral: light may arrive at x from any direction over the hemisphere (\Omega) over x. The integral is recursive: light arriving from y is either emitted by y, or reflected by y, so L(x\leftarrow y) is calculated using the same formula. Recursion typically terminates when we find an emissive surface: if L_e(s\leftarrow x)>0 we do not evaluate the integral.

Several alternative formulations of the rendering equation exist. One example is the hemispherical formulation:

\tag{2}L_o\left (x,\omega_o\right )=L_e\left (x,\omega_o\right )+\int_{\Omega}f_r\left (x,\omega_i,\omega_o\right )L_i\left (x,\omega_i\right )\left (\omega_i\cdot n\right )d\omega_i

This states that the light leaving x in direction \omega_o is the light emitted by x in direction \omega_o, plus the light reflected by x in direction \omega_o. Reflected light is again an integral: radiance arriving from all directions \omega_i over the hemisphere over x, converted to irradiance by the \omega_i\cdot n factor (where n is the surface normal at x), and scaled by the BRDF f_r(x,\omega_i,\omega_o).

Another interesting formulation is the transport formulation:

\tag{3}L=L_e+TL

Which leaves out a lot of details, and simply states that the light leaving a point is the light emitted by the point, plus the light transported by the point. The transported light can either be emitted by a third point, or it can be transported by that third point, so:

\tag{4}L=L_{e_1}+TL_{e_2}+TTL_{e_3}+\dots=\sum_{i=0}^{\infty}T^i L_{e_{i+1}}

Practical: a ray tracer (as well as a path tracer) creates images by sending rays from the camera lens into the world. The color of a pixel is determined by the light transported from the first surface point hit by a ray. The above equations allow us to calculate this light accurately.

Path Tracing

The path tracing algorithm uses Monte Carlo integration to evaluate the rendering equation. The basic algorithm operates as follows:

  1. A primary ray is constructed, which starts at the lens, and extends through a pixel.
  2. The end of this ray is the first scene surface that the ray intersects.
  3. This surface point may emit energy (L_e>0), or reflect energy (L_e=0). The reflected energy is evaluated by taking a single sample.
  4. Taking a single sample is implemented by generating a random direction \omega_i over the hemisphere and sending a new ray in that direction to probe incoming energy. This effectively extends the path by one segment. With the newly generated ray, the algorithm continues at step 2, until it terminates on a light emitting surface or leaves the scene.

For a single path, this yields a sample with high variance. This is resolved by sending many paths through each pixel.
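To make the four steps concrete, here is a minimal sketch of this loop in C-style code. Note that Ray, Hit, float3, Intersect, SkyColor and UniformSampleHemisphere are hypothetical helpers and types, not taken from any particular renderer; a real implementation would also prefer Russian roulette over a fixed depth limit.

float3 Trace( Ray ray, int depth )
{
    if (depth > MAXDEPTH) return float3( 0 );        // crude termination
    Hit h = Intersect( ray );                        // step 2: nearest surface along the ray
    if (!h.valid) return SkyColor( ray.dir );        // the ray left the scene
    if (h.isLight) return h.emission;                // step 3: emitter found, path terminates
    // step 4: extend the path by one segment, sampled uniformly over the hemisphere
    float3 wi = UniformSampleHemisphere( h.normal ); // consumes two random numbers
    float pdf = 1.0f / (2.0f * PI);                  // constant pdf: all directions equally likely
    float3 Li = Trace( Ray( h.pos, wi ), depth + 1 );// probe incoming light: back to step 2
    return h.brdf * Li * dot( wi, h.normal ) / pdf;  // single-sample estimate of the integral
}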

One path is now effectively one sample of the rendering equation. Note the similarity with particle transport: a single path bears strong resemblance to the behavior of a photon bouncing around the scene. An obvious distinction is the direction of the path as a whole: photons originate at light sources; path tracing operates in reverse. According to the Helmholtz reciprocity principle, both directions are equivalent.

Also note the dimensionality of a path sample: for a single bounce, we need two random numbers to generate a direction on the hemisphere, i.e. D=2. At two bounces, D=4 and so on. Additional dimensions are needed to sample the area of the pixel, the area of the lens, and a time slice. The curse of dimensionality strikes hard, rendering Riemann sums useless.

Figure 1 shows a path traced scene. The surface point in the center of the red crosshairs is, like all other pixels of the image, at the end of a camera ray. This surface point x reflects light towards the camera sensor s, as described by the rendering equation: this is a combination of light arriving from the light source, light arriving from the sky, and light arriving via other surfaces in the scene.

Figure 1: path traced scene. The hemisphere visualized on the right is taken from the perspective of the surface point in the crosshairs.

The light reflected by x towards the camera arrives over the hemisphere over x. The figure shows this hemisphere, and next to it an enlarged version of (two sides of) this hemisphere.

Monte Carlo Integration of the Hemisphere

We can evaluate the integral over the hemisphere using Monte Carlo integration. A naive implementation is straightforward: we generate two random numbers, which we use to determine a direction \omega_i on the hemisphere, and sample incoming light from that direction.

Combining the Monte Carlo integrator from the first article with the integral over the hemisphere of the rendering equation yields:

\tag{5}\int_{\Omega}f_r\left (x,\omega_i,\omega_o\right )L_i\left (x,\omega_i\right )\left (\omega_i\cdot n\right )d\omega_i\approx \frac{A}{n}\sum_{k=1}^{n}f_r(x,\omega_k,\omega_o)L_i(x,\omega_k)(\omega_k\cdot n)

where \omega_o is the direction towards the camera, \omega_k is a random direction on the hemisphere, and A is the surface area of a hemisphere of radius 1, which is simply 2\pi.
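As a sanity check of Equation 5, the self-contained program below integrates f(\omega)=\omega\cdot n over the hemisphere with uniform samples, with f_r and L_i set to 1; the analytic answer is \pi. It uses the fact that for uniform hemisphere sampling around n=(0,0,1), the z coordinate (and thus \omega_k\cdot n) is itself uniformly distributed in [0,1].

#include <cstdio>
#include <random>

int main()
{
    const double PI = 3.141592653589793;
    std::mt19937 rng( 42 );
    std::uniform_real_distribution<double> U( 0.0, 1.0 );
    const double A = 2.0 * PI;          // surface area of the unit hemisphere
    const int n = 1000000;
    double sum = 0.0;
    for (int k = 0; k < n; k++)
    {
        double cosTheta = U( rng );     // (w_k . n) for a uniform hemisphere sample
        sum += cosTheta;                // f_r and L_i are 1 in this test
    }
    printf( "estimate: %f, expected: %f\n", A * sum / n, PI );
    return 0;
}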

Using Monte Carlo with importance sampling (article 1, Equation 7) we get:

\tag{6}\int_{\Omega}f_r\left (x,\omega_i,\omega_o\right )L_i\left (x,\omega_i\right )\left (\omega_i\cdot n\right )d\omega_i\approx \frac{1}{n}\sum_{k=1}^{n}\frac{f_r(x,\omega_k,\omega_o)L_i(x,\omega_k)(\omega_k\cdot n)}{p(\omega_k)}

In other words: not all random incoming directions \omega_k need to have the same chance of being selected; we can use a pdf to focus our efforts.

Importance Sampling the Hemisphere: Cosine

Looking at the integral, even if we know nothing about f_r and L_i, we can at least conclude one thing: the term that is being summed is proportional to \omega_k\cdot n. This is called the cosine term, and accounts for the fact that a beam of incoming light (radiance) is distributed over a larger area when it arrives at a greater angle from the normal, yielding less energy per unit area (irradiance). In the absence of other information, this cosine term makes a good pdf.

Figure 2: plot of p(x)=\frac{\cos\theta}{2}.

The raw cosine requires normalization: in 3D, it integrates to \pi over the hemisphere. We now have our pdf: p(\omega_k)=\frac{\omega_k\cdot n}{\pi}. The remaining question is how to pick a random \omega_k proportional to this pdf. The Global Illumination Compendium provides an answer: given two uniform random numbers r_1 and r_2, we obtain a cosine weighted random vector \omega_k with

\tag{7}x=\cos \left (2\pi r_1 \right )\sqrt{1-r_2}\\y=\sin \left (2\pi r_1\right )\sqrt{1-r_2}\\z=\sqrt{r_2}

Note that being able to sample a direction exactly in proportion to a pdf, as we do here, is quite rare. Also note that, at least for the \omega_k\cdot n term, this pdf is perfect: it is not an approximation of the function we are trying to integrate.
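As a sketch, Equation 7 translates to the function below, which returns a cosine-weighted direction in the local frame where the normal is (0,0,1); the Vec3 type is a placeholder, and transforming the result to the actual shading frame is left out here.

#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 CosineSampleHemisphere( float r1, float r2 )
{
    const float TWOPI = 6.2831853f;
    float s = sqrtf( 1.0f - r2 );
    Vec3 w;
    w.x = cosf( TWOPI * r1 ) * s;
    w.y = sinf( TWOPI * r1 ) * s;
    w.z = sqrtf( r2 );     // z equals cos(theta), so the pdf for w is z / pi
    return w;
}

When using such a direction in Equation 6, the sample is divided by its pdf value, w.z / \pi.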

Importance Sampling the Hemisphere: Direct Illumination

Figure 3 shows a latitude-longitude version of the hemisphere we used before.

Figure 3: latitude-longitude plot of (the magnitude of) the radiance arriving over the hemisphere of the surface point.

Incoming radiance at x is dominated by direct light (from the light source, and to a lesser extent, from the sky). We would thus like to focus on directions that will hit the light source.

This time, a pdf is not easily created. However, we can use a trick. Recall the pdf in Figure 10 of part 1, where half of the domain received 70% of the samples. We can do something similar for direct illumination. Earlier we defined the full light transport as:

\tag{8}L=L_{e_1}+TL_{e_2}+TTL_{e_3}+\dots=\sum_{i=0}^{\infty}T^i L_{e_{i+1}}

Applied to point x, the term TL_{e_2} represents all direct illumination arriving at x (L_{e_1} is the light emitted by x). The subsequent terms TTL_{e_3}+TTTL_{e_4}+... represent indirect illumination. We can thus evaluate direct and indirect illumination separately, using two distinct integrals, and sum the result.

Figure 4: 2D visualization of the hemisphere. Left: the red shape represents all incoming light at x. Center: the direct light component of the incoming light. Right: the indirect component of the incoming light.

This is illustrated in Figure 4. The left image shows a 2D representation of the hemisphere. The red shape indicates the magnitude of the illumination: point x is lit from all directions, but most of the energy is coming from the light source (blue). The center image shows the direct illumination, and the right image the indirect illumination. The sum of these is the original integral.

Estimating just the indirect illumination is straightforward: we pick a random direction on the hemisphere, and when the ray happens to hit a light source, we ignore its result. This seems like a waste, but lights typically do not occupy a significant portion of the hemisphere. Since we pick each direction with the same probability as we did before, we can use the pdf we used before.

Estimating just the direct illumination is also straightforward: we pick a random direction towards the light, by aiming at a random point on the light source. Note that this time we definitely do not pick each direction with the same probability, so we need a new pdf: one that is zero almost everywhere, and constant over the area of the light source projected on the hemisphere. The constant must be chosen so that the pdf integrates to 1. This is the case if the constant is 1/SA, where SA is the area of the light source projected on the hemisphere, i.e. its solid angle:

\tag{9}SA\approx\frac{A_{light}\left (n_{light}\cdot \omega_i\right )}{dist^2}
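In code, Equation 9 could look like the helper below. Vectors are plain float[3] arrays here; nLight is the light's surface normal, toLight the normalized direction from the shading point to the sampled point on the light, and dist the distance between the two points.

float SolidAngle( float areaLight, const float nLight[3], const float toLight[3], float dist )
{
    float cosO = nLight[0] * toLight[0] + nLight[1] * toLight[1] + nLight[2] * toLight[2];
    if (cosO < 0) cosO = -cosO;   // sign conventions for the light normal differ per renderer
    return areaLight * cosO / (dist * dist);
}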

Modified Path Tracing Algorithm

With the discussed importance sampling techniques we can now formulate the improved path tracing algorithm:

  1. A primary ray is constructed, which starts at the lens, and extends through a pixel.
  2. The end of this ray is the first scene surface that the ray intersects.
  3. This surface point may emit energy (L_e>0), or reflect energy (L_e=0). The reflected energy is evaluated by summing two samples: one for the direct illumination, and one for the indirect illumination.
  4. To sample the direct illumination, we pick a random point on the light source, and probe this direction using a ray. The result is divided by the pdf, which is 1/SA.
  5. To sample the indirect illumination, we generate a random direction proportional to the pdf: (\omega_i\cdot n)/\pi, and continue with step 2 with a ray in this direction. The result is divided by the pdf.
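Steps 3 to 5 could be sketched as follows. PickPointOnLight, Occluded, EvalBRDF and TraceNoEmitters (a Trace variant that ignores emission at the surface it hits, per the indirect illumination rule above) are hypothetical helpers; CosineSampleHemisphere is assumed to include the transform to the frame around h.normal.

float3 Shade( Hit h, float3 wo, int depth )
{
    if (h.isLight) return h.emission;                 // step 3: emitter found
    // step 4: direct illumination, sampled with an explicit ray towards the light
    LightSample ls = PickPointOnLight();              // random point on the light source
    float3 L = ls.pos - h.pos;
    float dist = length( L );
    float3 wi = L * (1.0f / dist);
    float3 direct = float3( 0 );
    if (!Occluded( h.pos, ls.pos ))
    {
        float SA = ls.area * fabsf( dot( ls.normal, wi ) ) / (dist * dist);
        float cosI = fmaxf( 0.0f, dot( h.normal, wi ) );
        direct = EvalBRDF( h, wi, wo ) * ls.emission * cosI * SA; // divided by pdf = 1/SA
    }
    // step 5: indirect illumination, with a cosine-weighted bounce direction
    float3 wb = CosineSampleHemisphere( h.normal );   // pdf = (wb . n) / pi
    float3 Li = TraceNoEmitters( Ray( h.pos, wb ), depth + 1 );
    float3 indirect = EvalBRDF( h, wb, wo ) * Li * PI; // cosine and pdf cancel; pi remains
    return direct + indirect;
}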

Synchronize

At this point, the following concepts should be clear:

  • The importance sampled version of the Monte Carlo integrator (part 1, Equation 7) yields correct results for any valid pdf, including p(x)=1. Many valid pdfs are possible; some will yield lower variance than the constant pdf, others may yield higher variance.
  • An important term in the indirect illumination is \omega_i\cdot n. A pdf proportional to this term typically lowers variance, even though it doesn’t take into account the other terms, or G.
  • The union / sum of the integrals of direct and indirect illumination over the hemisphere is the integral of all illumination over the hemisphere.
  • Sampling direct light using a separate ray towards the light source is a form of importance sampling: we skip many directions on the hemisphere. We thus need a pdf. Normalizing it requires the solid angle.

If not, please reread the relevant sections, and if that fails, ask questions.

Multiple Lights

One last thing that was omitted in the description of sampling direct light is how to handle multiple lights. The solution to this problem is simple, but why this works requires some explanation.

If we have n lights in total, we pick a random light, and evaluate it as we did before: the pdf is 1/SA or zero; 1/SA for the set of directions towards this particular light, zero elsewhere. The result we get from sampling the random light is then multiplied by n.

To understand why this works, let’s consider the graph of Equation 3 again:

Figure 5: plot of Equation 3 of part 1.

Previously, we integrated this numerically by taking random samples in the domain [-2.5,2.5]. An alternative is to randomly pick one half of the domain for each sample. The domain then becomes either [-2.5,0] or [0,2.5], and thus the contribution of the sample is scaled by 2.5, instead of the 5 that we used for the full domain. Multiplying by 2 thus gives us the correct estimate.

This still works if the two subdomains are not equally sized. Suppose we have subdomains [-2.5,1] and [1,2.5]. Now half of our samples are scaled by 3.5, and the other half by 1.5. Multiplying by 2 yields \left (\frac{1}{2} \times 3.5 \times 2\right )+\left (\frac{1}{2} \times 1.5 \times 2\right )=5, which is still correct. Applying this to lights: each light is a subdomain of the full domain, i.e. the set of all lights. Each light may be picked with an equal probability. A selected light now represents all lights, and therefore its contribution is scaled by n.
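The self-contained program below demonstrates this with the unequal subdomains [-2.5,1] and [1,2.5], for the (arbitrarily chosen) function f(x)=x^2, whose integral over [-2.5,2.5] is 125/12 \approx 10.42.

#include <cstdio>
#include <random>

int main()
{
    std::mt19937 rng( 7 );
    std::uniform_real_distribution<double> U( 0.0, 1.0 );
    const int n = 1000000;
    double sum = 0.0;
    for (int k = 0; k < n; k++)
    {
        double a, b;
        if (U( rng ) < 0.5) a = -2.5, b = 1.0;    // subdomain 1, width 3.5
        else a = 1.0, b = 2.5;                    // subdomain 2, width 1.5
        double x = a + (b - a) * U( rng );        // uniform sample in the chosen subdomain
        sum += (b - a) * x * x * 2.0;             // width * f(x), times 2 for picking 1 of 2
    }
    printf( "estimate: %f, expected: %f\n", sum / n, 125.0 / 12.0 );
    return 0;
}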

Importance Sampling Lights

When a scene is illuminated by multiple lights, picking a random light is not always the best option. A small, distant light will have less effect on a surface point than a nearby bright light source.

We can again use importance sampling to improve our estimate in this situation, by assigning a picking probability to each light source. Since these probabilities again form a pdf, they must sum to 1. To do so, we first estimate the potential contribution of each light source as I\times SA, where I is the emission of the light source per unit area, and SA, as before, the solid angle. An important factor missing here is the visibility of the light source: estimating visibility requires a ray, which is precisely the operation we wish to avoid.

Suppose we get the following potential contributions for four light sources:

light 1: 0.01
light 2: 5.02
light 3: 2.77
light 4: 0.59

We can normalize these values by dividing each by the sum of the four values (here: 8.39), which yields a valid pdf: it integrates to 1, and it is never zero unless the contribution of a light is 0.

Next, we create a cumulative distribution function (cdf), which stores the partial integrals of the pdf:

\tag{10}F(x)=\int_{a}^{x}p(y)dy

Or, in the case of a discrete pdf:

\tag{11}F(x)=\sum_{x_i\leq x}p(x_i)

In other words, for a random value x, the cdf tells us what the probability is that sampling according to the pdf produces this value, or a smaller one. For the four light sources, the cdf is simply:

cdf[0] = (0.01) / 8.39                         ≈ 0.1%
cdf[1] = (0.01 + 5.02) / 8.39                  ≈ 60%
cdf[2] = (0.01 + 5.02 + 2.77) / 8.39           ≈ 93%
cdf[3] = (0.01 + 5.02 + 2.77 + 0.59) / 8.39    = 100% 

We can now pick a light with a probability proportional to the pdf, using the cdf:

float r = Rand(); // uniform random number in [0,1)
int lightIndex = 0;
// advance to the first cdf entry that is >= r; the bounds test guards against
// floating point round-off pushing r past the last entry (numLights = 4 here)
while (lightIndex < numLights - 1 && cdf[lightIndex] < r) lightIndex++;

In other words: we look for the first cdf entry that is greater than or equal to the random number r; the index of this entry is the index of the selected light. Since we are now picking lights using a pdf, we need to divide the result by the value of the pdf. Note that we no longer multiply by the number of light sources: the light count is already accounted for by the pdf. In fact, when we picked each of the four light sources with equal probability, the value of the constant pdf was 0.25: multiplying by the light count is thus the same as dividing by this pdf.
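Putting the table and the loop together, the self-contained program below builds the cdf from the four potential contributions, picks a light, and reports the pdf value that the light's sample should be divided by.

#include <cstdio>
#include <random>

int main()
{
    const int N = 4;
    double contrib[N] = { 0.01, 5.02, 2.77, 0.59 };   // potential contributions
    double total = 0, cdf[N];
    for (int i = 0; i < N; i++) total += contrib[i];  // normalization constant, 8.39
    double running = 0;
    for (int i = 0; i < N; i++) { running += contrib[i]; cdf[i] = running / total; }
    std::mt19937 rng( 1 );
    std::uniform_real_distribution<double> U( 0.0, 1.0 );
    double r = U( rng );
    int lightIndex = 0;
    while (lightIndex < N - 1 && cdf[lightIndex] < r) lightIndex++;
    double pdf = contrib[lightIndex] / total;         // probability of picking this light
    printf( "picked light %i, pdf %f\n", lightIndex, pdf );
    return 0;
}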

Closing Remarks

This concludes this document. If you have any questions or suggestions, please contact me:

Jacco Bikker : bikker.j@gmail.com Twitter: @j_bikker
Github: https://github.com/jbikker/lighthouse2

Utrecht, December 12th, 2019.

Comments

  1. Alex
    December 13, 2019 at 3:35 pm

The formulation (1) of the rendering equation contains the geometry term G(s\leftrightarrow x), which you define as the inverse of ||s-x||². Is this used if there is a participating medium? Otherwise, the geometry term is 0 or 1?

    • jbikker
      December 13, 2019 at 3:42 pm

      Equation 1 is the ‘three point formulation’; 1/d^2 accounts for the fact that the solid angle of a light source is proportional to 1/d^2. If you would switch to the directional variant, reaching point x from s by selecting a random direction over the hemisphere has a probability proportional to 1/d^2. In the three-point version this needs to be included in the geometry factor; in the other formulations this is accounted for by probabilities.

      • Alex
        December 13, 2019 at 6:50 pm

Thank you for your response. I see the hemispherical formulation much more frequently, but the original formulation of the rendering equation is the three point formulation.

  2. December 16, 2019 at 10:00 pm

    Flipcode may not be active anymore, but you sure haven’t lost your excellent ability to take fancy academic math heavy topics and boil them down to the kind of pragmatism that a guy like me can understand. Thanks for writing these Jacco, much appreciated (and as we’re about to move our traditional single ray volume raymarcher at work to a physically-ish based path traced kind of setup, this was extremely well-timed and applicable).

Stroopwafels and a beer!

    • jbikker
      December 17, 2019 at 12:46 pm

Thanks Jaap! Stroopwafels and a beer it is. 🙂

  3. Justin
    December 20, 2019 at 11:31 pm

    Can you help me understand the Solid Angle calculation in equation 9? A_light is the surface area of the light, yes? n_light is the surface normal of the light? So is this a very rough approximation of the light’s solid angle?

    Also, how would you calculate the solid angle for a complex light source such as an arbitrary triangular mesh?

    • jbikker
      December 21, 2019 at 1:37 pm

A_light indeed is the area of the light, and n_light is the surface normal of the light. It is typically not a rough approximation, though. Only when the light is large and close does the approximation start to fail. For the typical case where the solid angle of the light is less than a few percent of 2\pi, the approximation is accurate enough (and commonly used in renderers).
The SA for a triangle mesh is typically not calculated; the mesh is broken down into individual triangles, which get sampled with individual probabilities.

      • Justin
        December 28, 2019 at 10:22 pm

        You say “Only when the light is large and close the approximation starts to fail.” Do you mean that it produces incorrect results? Or simply that it will take more samples to converge?

  4. Justin
    December 28, 2019 at 10:24 pm

    These are fantastic articles. For some reason, other resources have failed to get these concepts to “click” for me, but these did the trick.

    Any chance you want to write a similar article(s) about bidirectional path tracing based on Veach’s thesis? I banged my head against that for two weeks and still don’t understand it.
