Deep Shadow Maps
Abstract
We introduce deep shadow maps, a technique that produces fast, high-quality shadows for primitives such as hair, fur, and smoke. Unlike traditional shadow maps, which store a single depth at each pixel, deep shadow maps store a representation of the fractional visibility through a pixel at all possible depths. Deep shadow maps have several advantages. First, they are prefiltered, which allows faster shadow lookups and much smaller memory footprints than regular shadow maps of similar quality. Second, they support shadows from partially transparent surfaces and volumetric objects such as fog. Third, they handle important cases of motion blur at no extra cost. The algorithm is simple to implement and can be added easily to existing renderers as an alternative to ordinary shadow maps.

Figure 1: Hair rendered with and without self-shadowing.
(a) A stack of semitransparent objects (b) Partial coverage by opaque blockers (c) Volume attenuation due to smoke
Figure 3: Visibility functions in flatland. Each diagram shows a beam of light that starts at the shadow camera origin (i.e. the light source) and passes
through a single pixel of the deep shadow map, accompanied by that pixel’s visibility function. (a) The beam’s power is reduced as it passes through
consecutive semitransparent surfaces. (b) The blockers are opaque, but each covers only part of the pixel’s area; the emphasized segments of the function
correspond to visible portions of the blockers. (c) Passage through smoke reduces the beam’s power in a more continuous manner.
The alpha value of each pixel is related to its visibility function by

    α_{i,j} = 1 − V_{i,j}(∞) .

A deep shadow map is equivalent to computing the approximate value of 1 − α at all depths, and storing the result as a function of z. In this way each pixel contains the combined attenuation and coverage information for every depth.
3.2 Sampling τ
In this section we describe how deep shadow maps can be generated
using the facilities of any standard renderer. Our strategy is similar to ordinary image sampling. To generate a deep shadow map, we select a set of sample points across the shadow camera's image plane (for example, 16 samples per pixel on a jittered grid). For each sample point we determine the corresponding transmittance function, which describes the light falloff along a particular primary ray. Finally, the visibility function for each pixel is computed by taking a weighted combination of the transmittance functions at nearby sample points.

Figure 4: Constructing a transmittance function. (a) The object intersections along a given ray yield the surface transmittance function τ^s, which has a discontinuity at the depth of each surface. (b) The extinction function is obtained by sampling the atmospheric density at regular intervals along the ray. (c) The extinction function is integrated and exponentiated to yield the volume transmittance τ^v. (d) The surface transmittance and volume transmittance are multiplied to obtain the final transmittance function for each ray.
We first describe how to compute a single transmittance function. Given an image point (x, y), we compute the surfaces and volume elements intersected by the corresponding primary ray. The surface intersections can be found using either a ray tracer or an ordinary scan conversion renderer, and we assume that the properties of volumetric objects can be evaluated at any points desired. The transmittance function at the point (x, y) can then be expressed as the product of a surface transmittance function τ^s and a volume transmittance function τ^v, as described below.

Surface transmittance is estimated using all of the surface intersections along the primary ray at (x, y). Each surface hit has a depth value z_i^s and an opacity O_i. These are composited in the usual way, starting with a transparency of 1 and multiplying by 1 − O_i at each surface hit, to yield a piecewise constant function τ^s (see Figure 4a). Notice that each surface hit generates two vertices with the same z value, in order to represent the discontinuous steps as a piecewise linear curve.² The "vertices" at z = 0 and z = ∞ are represented implicitly and are not part of the output.

To estimate the volume transmittance, we sample the atmospheric density at regular intervals along the ray. Each volume sample has a depth value z_i^v and an extinction coefficient κ_i that measures the light falloff per unit distance along the ray. We linearly interpolate between these samples to yield the extinction function κ (see Figure 4b). The fraction of light that penetrates to a given depth z is then given by the formula³

    τ^v(z) = exp(−∫₀^z κ(z′) dz′) .

Since this function is not piecewise linear, we approximate it by evaluating the transmittance at each vertex of the extinction function and linearly interpolating. We do this by computing the transmittance of each linear segment as

    T_i = exp(−(z_{i+1}^v − z_i^v)(κ_{i+1} + κ_i)/2)

and compositing as we did for the surface transparencies, except that we interpolate between vertices rather than forcing discrete steps. This yields the volume transmittance function τ^v (see Figure 4c). We then merge the surface and volume components by multiplying them:

    τ(z) = τ^s(z) τ^v(z)

(see Figure 4d). Since this function is again not piecewise linear, we evaluate it at the combined vertices of τ^s and τ^v and interpolate linearly between them.

Finally, we describe how the transmittance functions are combined to yield the visibility function V_{i,j} for an entire pixel. At each depth z the nearby transmittance functions are filtered just like ordinary image samples:

    V_{i,j}(z) = Σ_{k=1}^{n} w_k τ_k(z) ,   (1)

where τ_1, …, τ_n are the transmittance functions at the n sample points within the pixel's filter radius, and the w_k are the corresponding normalized filter weights.

² The wasted space in this representation is removed during the compression phase, which approximates the average of many discrete steps with a single linear segment. A true "step" in the compressed output function would occur only for a surface exactly parallel to the xy-plane.

³ If the shadow camera is not orthographic, a correction factor is needed for each ray to account for the relationship between depth and distance.
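As a concrete illustration, the construction above can be sketched in a few lines of Python. This is our own sketch, not code from the paper; all function names are invented, and the duplicated-z vertices that encode surface steps are only approximated (the product is evaluated once per distinct z, so a true step degenerates to a short ramp here).

```python
import math

def surface_transmittance(hits):
    """Composite sorted surface hits (z_i, O_i) into a piecewise constant
    transmittance tau^s: each hit emits two vertices with the same z so
    the step can be represented as a piecewise linear curve."""
    verts, t = [], 1.0
    for z, opacity in hits:
        verts.append((z, t))       # value just before the hit
        t *= 1.0 - opacity         # attenuate by this surface
        verts.append((z, t))       # value just after the hit
    return verts

def volume_transmittance(samples):
    """Integrate linearly interpolated extinction samples (z_i, kappa_i)
    with the trapezoid rule and exponentiate, giving tau^v at each vertex
    of the extinction function."""
    verts = [(samples[0][0], 1.0)]
    optical_depth = 0.0
    for (za, ka), (zb, kb) in zip(samples, samples[1:]):
        optical_depth += (zb - za) * (ka + kb) / 2.0
        verts.append((zb, math.exp(-optical_depth)))
    return verts

def evaluate(verts, z):
    """Evaluate a piecewise linear function given as (z, value) vertices."""
    if z <= verts[0][0]:
        return verts[0][1]
    for (za, va), (zb, vb) in zip(verts, verts[1:]):
        if z <= zb:
            return va if zb == za else va + (vb - va) * (z - za) / (zb - za)
    return verts[-1][1]

def transmittance(surf, vol):
    """Merge the two components by multiplying them at the combined
    vertices (Figure 4d)."""
    zs = sorted({z for z, _ in surf} | {z for z, _ in vol})
    return [(z, evaluate(surf, z) * evaluate(vol, z)) for z in zs]
```

A real renderer would also apply the non-orthographic depth correction of footnote 3 before integrating the extinction samples.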
3.3 Compression
Visibility functions sampled in this way may have a large number of vertices, depending on the filter radius and the number of samples per shadow pixel. Fortunately these functions are generally quite smooth, making them easily compressed. The compressed functions are stored as an array of floating-point pairs, each containing a z value and a fractional visibility V.

It is very important that the compression method preserve the z values of important features, since even small errors in z can lead to undesirable self-shadowing artifacts. The method must also be appropriate for the unbounded domain z ∈ [0, ∞). These facts preclude the use of compression methods based on a fixed or hierarchical set of basis functions (e.g. a wavelet basis). It also implies that the L¹ and L² error metrics are unsuitable, since visibility errors are important even if they occur over a very small range of z values. Instead we use the L∞ error metric (maximum error), and compress functions using the simple greedy algorithm described below.

Given a visibility function V and an error tolerance ε (see Figure 5a), our algorithm outputs a new visibility function V′ such that

    |V′(z) − V(z)| ≤ ε   for all z

and where V′ typically has a much smaller number of control points (Figure 5d). The main feature of the algorithm is that it is incremental: It reads and writes control points one at a time in increasing z order, and requires only a constant amount of state information.

The basic idea is that at each step, the algorithm draws the longest possible line segment that stays within the error bounds (similar to hitting a ball through as many wickets as possible in a game of croquet). The origin of the current segment is fixed, and we only need to choose the direction and length of the segment. To simplify the implementation, we restrict the output z values to be a subset of the input z values.

Let the origin of the current output segment be (z′_i, V′_i). At every step we maintain the range of permissible slopes [m_lo, m_hi] for the segment. Each new control point (z_j, V_j) of the input function V imposes a constraint on the current slope range by forcing the segment to pass through the target window defined by the wedge from the segment origin to the two points (z_j, V_j ± ε) (see Figure 5b). The current slope range is initialized to [−∞, ∞], and is intersected with each target window in succession until further progress would make it empty (see Figure 5c). We then output the line segment with slope (m_lo + m_hi)/2 terminating at the z value of the last control point visited. The endpoint of this segment becomes the origin of the next segment, and the entire process is repeated. Note that the midpoint slope rule attempts to center each segment within the allowable error bounds.

This algorithm is fast, simple to implement, and requires constant storage. Slightly better approximations could be obtained by doing a least-squares fit once the z values of the control points have been chosen. However, the basic algorithm satisfies the given error criteria and generates very good approximations in practice.

Figure 5: Our compression algorithm. (a) A piecewise linear curve and an illustration of its error bound. (b) Each input vertex defines a target window that constrains the slope of the next output segment. (c) The current slope range is intersected with each target window until it would become empty. (d) The output segment is extended to the current z value with a slope equal to the midpoint of the current slope range, and this process is repeated.

3.4 Lookups

Like textures, deep shadows are accessed by applying a reconstruction and resampling filter to a rectangular array of pixel values. In our case the pixel values are obtained by evaluating the visibility functions at a constant depth z. Given a point (x, y, z) at which to perform the lookup and a two-dimensional filter kernel f, the filtered shadow value is given by

    V(x, y, z) = ( Σ_{i,j} w_{i,j} V_{i,j}(z) ) / ( Σ_{i,j} w_{i,j} )

where w_{i,j} = f(i + ½ − x, j + ½ − y) is the filter weight for pixel (i, j), and the sum is over all pixels within the filter radius.
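The greedy compression pass of Section 3.3 is compact enough to sketch in full. The code below is our own illustrative sketch, not the paper's implementation (all names are invented); it assumes control points with strictly increasing z, so the duplicated-z vertical steps produced by surface hits would need the special-case handling mentioned in the text.

```python
def compress(points, eps):
    """Greedy one-pass L-infinity compression of a piecewise linear
    function given as (z, V) control points with strictly increasing z.
    Maintains the permissible slope range [lo, hi] of the current output
    segment and emits a segment with the midpoint slope whenever the
    next target window would make the range empty."""
    out = [points[0]]
    z0, v0 = points[0]                       # origin of current segment
    lo, hi = float("-inf"), float("inf")     # permissible slope range
    prev_z = z0
    for z, v in points[1:]:
        m_min = (v - eps - v0) / (z - z0)    # wedge to (z, v - eps)
        m_max = (v + eps - v0) / (z - z0)    # wedge to (z, v + eps)
        if max(lo, m_min) > min(hi, m_max):  # window would empty the range
            m = (lo + hi) / 2.0              # midpoint slope rule
            z0, v0 = prev_z, v0 + m * (prev_z - z0)
            out.append((z0, v0))             # endpoint becomes new origin
            lo = (v - eps - v0) / (z - z0)   # restart range from new origin
            hi = (v + eps - v0) / (z - z0)
        else:
            lo, hi = max(lo, m_min), min(hi, m_max)
        prev_z = z
    # flush the final segment, ending at the last input z value
    out.append((points[-1][0], v0 + (lo + hi) / 2.0 * (points[-1][0] - z0)))
    return out
```

Each emitted vertex lies within ε of the input at its z value, since the midpoint slope belongs to a range that satisfied every target window up to that point.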
Evaluating each visibility function requires a search through its data points to determine which segment contains the given z value. This can be done using a linear or binary search, depending on the number of data points. In our implementation, we take advantage of the fact that many shadow lookups are often done at nearby z values by storing a pointer with each pixel to the most recently accessed segment. On each lookup we search linearly either forward or backward from this position in order to reduce the average cost of visibility function evaluations.

4 Discussion

One of the main advantages of deep shadows over regular shadow maps is that they support prefiltering. Each deep shadow map pixel summarizes many individual depth samples in such a way that evaluating the visibility function at a given z value is equivalent to percentage closer filtering all of the depth samples within the pixel's filter radius (to within the tolerance used for compression). Prefiltering is important because accurate shadows require large numbers of depth samples, as we saw in Section 2. This is equally true for both ordinary shadow maps and deep shadow maps.

Although deep shadows do not reduce the number of depth samples that must be taken from the scene, they greatly reduce the amount of data that must be accessed during filtering. For example, recall that in order to compute a shadow of dense hair with an expected error of 1%, approximately N = 2500 samples are needed. Using a deep shadow map with 250 samples per pixel, we would need to filter only 10 pixels to achieve the required accuracy. Furthermore deep shadows can be mip-mapped (see Section 5.3), and thus accurate shadows require only a constant number of pixel accesses even when filter widths are very large.

Prefiltering not only makes shadow lookups faster, but also allows deep shadows to be much smaller than the equivalent high-resolution depth map. This is an advantage when deep shadow maps are written, stored, and cached in memory. Note that the advantages of prefiltering are completely dependent on compression. If we did not compress the visibility functions at all, then each pixel would contain the data from all the underlying samples and would not be any smaller.

Fortunately, at any sampling rate there is an error tolerance that allows significant compression without compromising shadow quality. Specifically, recall from Section 2 that shadows of detailed geometry have an expected error of O(N^{−1/2}), where N is the number of samples per deep shadow pixel. This error is a measure of the noise inherent in the sampled visibility function. Since there is no point in preserving noise, this suggests that O(N^{−1/2}) is a suitable tolerance for compression. The tolerance we actually use is 0.25/√N, which is about half of the maximum expected noise magnitude of 0.5/√N.⁴

Using this tolerance, we can show that deep shadows are asymptotically much smaller than regular shadow maps. Since each visibility function decreases monotonically from 1 to 0 (assuming that the filter function has no negative lobes), and the function decreases by at least the compression tolerance at each vertex, the compressed function can have at most O(N^{1/2}) vertices. The corresponding regular shadow map uses O(N) storage for the depth values in each pixel, and so the deep shadow map must be smaller by at least a factor of Ω(N^{1/2}).

The compression ratios are even better when the visibility functions are asymptotically piecewise smooth (which is the usual case). A simple Taylor series argument shows that in this case the compression error decreases quadratically in the number of output vertices, so that functions can be compressed with a tolerance of O(N^{−1/2}) using only O(N^{1/4}) vertices. Thus deep shadow maps are typically smaller than their regular counterparts by at least a factor of Ω(N^{3/4}). This is a substantial advantage when many samples per pixel are used (say N = 250).

The main disadvantage of deep shadow maps is that they are significantly more expensive to compute than a regular shadow map of the same pixel resolution (because many more samples per pixel are taken). This contradicts the conventional assumption that shadow map generation is cheap [9]. On the other hand, they are typically no more expensive to compute than a shadow map with the same number of depth samples, and they are considerably cheaper to store and access. Note that generating more depth samples is relatively inexpensive in a scanline renderer, since it does not increase the shading cost.

Another potential issue with deep shadows is bias. Because filtering is performed at a constant z depth, large objects may suffer from incorrect self-shadowing. Although this artifact also occurs with normal shadow maps, deep shadows exacerbate the problem because they encourage the use of large filter widths.

However, it is important to realize that the bias problems are no worse than they would be for an ordinary shadow map at the same filter width. The main limitation of deep shadow maps compared to high-resolution ordinary shadow maps is that the minimum filter width is larger (because each deep shadow pixel summarizes many depth samples). However, this can be considered an advantage: It provides deep shadows with an extra degree of freedom to control the tradeoff between shadow detail and noise. The deep shadow resolution should be chosen according to the minimum filter width desired (i.e. shadow detail), while the number of samples per pixel should be determined by the maximum acceptable noise. This strategy allows the depth samples that are required only for accuracy purposes to be represented very compactly, with bias problems that are no worse than they would be for a regular shadow map of the same pixel resolution.

⁴ Recall that the expected error is only O(N^{−3/4}) when the shadow geometry is simple, which suggests that a compression tolerance of O(N^{−1/2}) is too large in this case. But since N must be chosen large enough to control the noise artifacts in the worst-case pixels, there is little benefit to compressing more accurately than this.

5 Implementation Issues

5.1 Incremental Updates

Recall that each visibility function is defined as the weighted average of n piecewise linear transmittance functions, according to equation (1). The naïve way to generate this function is to sort all of the input vertices and process them in z order, evaluating the n contributing functions at each vertex. Unfortunately this approach has O(n²) complexity: there are O(n) input vertices, and O(n) work is needed to compute the weighted average at each one. This is quite inefficient when large numbers of samples per pixel are used.

Instead, we describe an O(n log n) sweep algorithm that has a constant update cost per vertex. The algorithm is easier to understand if we first suppose that the transmittance functions are piecewise constant. In this case, we can efficiently compute the output function as follows. At z = 0, the weighted average is easily computed as V(0) = 1. We then process all of the input vertices in increasing z order, which can be done in O(log n) time per vertex by storing the next vertex of each transmittance function in a heap. For every vertex, we update the current sum V by simply subtracting out this transmittance function's old contribution and adding in its new contribution. That is,

    V′ = V + w_j (τ′_j − τ_j)

where w_j is the filter weight for the chosen transmittance function, τ_j is the old value of this function and τ′_j is its new value.
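For the piecewise constant case, this sweep can be sketched as follows (our own illustrative code with invented names, using Python's heapq; the piecewise linear extension described next additionally tracks the current output slope):

```python
import heapq

def sweep_average(functions, weights):
    """Weighted average of piecewise constant transmittance functions.

    Each function is a list of (z, new_value) vertices in increasing z,
    starting implicitly at value 1.0 at z = 0. Returns the vertices of
    V(z) = sum_k w_k tau_k(z), with a constant-time update per vertex
    plus an O(log n) heap operation to find the next vertex in z order."""
    value = sum(weights)                 # V(0): every function starts at 1.0
    current = [1.0] * len(functions)     # current value of each tau_k
    heap = [(fn[0][0], k, 0) for k, fn in enumerate(functions) if fn]
    heapq.heapify(heap)                  # next unprocessed vertex of each fn
    out = [(0.0, value)]
    while heap:
        z, k, i = heapq.heappop(heap)    # next vertex in increasing z
        new = functions[k][i][1]
        value += weights[k] * (new - current[k])  # V' = V + w_j (tau' - tau)
        current[k] = new
        out.append((z, value))
        if i + 1 < len(functions[k]):    # advance this function's cursor
            heapq.heappush(heap, (functions[k][i + 1][0], k, i + 1))
    return out
```

With normalized weights, the output starts at V(0) = 1 and each input vertex contributes exactly one output vertex, matching the O(n log n) bound.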
This method can be extended to piecewise linear functions by using a similar technique to keep track of the output function's current value and slope. The update for each vertex consists of two steps: First, we extend the output function (using its current position and slope) to the z value of the next input vertex, and then we update the current output slope using the method described above. (Vertical steps are handled as a special case, by updating the current position rather than the current slope.)

This technique is much more efficient than computing the weighted averages directly, and makes the algorithm practical even when very large numbers of samples per pixel are used. Note that all of the transmittance functions do not need to be stored simultaneously as the deep shadow map is rendered; it is sufficient to store only the previous several scanlines, as determined by the filter radius.

5.2 Colored Shadows

Colored shadows are supported by simply encoding a different visibility function for each color channel (one each for red, green, and blue). The compression algorithm processes all the channels simultaneously, and starts a new segment whenever any of the three functions would exceed its error threshold. The output is a sequence of tuples (z, V_R, V_G, V_B). Notice that with this representation, three-channel maps are only twice as large as one-channel maps.

To reduce storage even further, our format allows each pixel to encode either one or three visibility functions depending on its needs. If all three channels happen to be the same, we store only one channel and set a flag indicating that this pixel is monochrome.

5.3 Mip-mapping

Since deep shadow maps are filtered like textures, it is straightforward to apply mip-mapping [12]. Starting with the highest-resolution deep shadow map, each new mip-map level is obtained by averaging and downsampling the previous level by a factor of two. Each pixel is defined by taking the average of four visibility functions, and recompressing the result.

To avoid the accumulation of too much error, the compression tolerance can be reduced on each successive level. For example, if the error threshold is cut in half each time, the total error will be at most twice that permitted for the highest-resolution map. This method corresponds to the analysis of Section 4, which suggests that the compression tolerance should be O(N^{−1/2}) in the number of contributing depth samples N. The storage per sample increases somewhat at coarser levels, but since the functions being compressed are asymptotically piecewise smooth, their compressed size is O(N^{1/4}) (see Section 2). This implies that the number of vertices per visibility function doubles once for every two mip-map levels, and that the full mip-map is approximately 1/(1 − √2/4) ≈ 1.55 times as large as the base level (rather than the 4/3 ratio for an ordinary mip-map).

Mip-mapped deep shadows are filtered just like mip-mapped textures, including anisotropic filters, lerping between levels, etc. [1].

5.4 Tiling and Caching

We store deep shadow maps in a tiled format, similar to those for textures [8]. This lets us load and cache only the deep shadow map tiles that are actually accessed.

One important difference is that unlike textures, deep shadow map tiles and pixels require varying amounts of storage in memory. To deal with this, our file format includes a tile directory that specifies the starting offset and size of every tile. Similarly, each tile has a table indicating the starting position and size of every pixel. Tile caching is handled by providing a fixed number of tile slots; when a tile fault occurs, the slot chosen for replacement is resized if necessary to hold the incoming tile. In this way, each tile slot grows to accommodate the largest tile it has seen. (Optionally the tile slots could also be resized when the incoming tile is significantly smaller.)

5.5 Motion Blur

It is easy to support motion blur in deep shadow maps by simply associating a random time with every shadow image sample (and its corresponding transmittance function). When these samples are averaged together into visibility functions, they account for the average coverage over time as well as over the image plane.

While motion blur is also possible with ordinary shadow maps, it is very expensive because of the large filter widths needed for adequate anti-aliasing. Deep shadow maps do much of this filtering in advance, and thus reduce the number of pixels that need to be accessed for each lookup.

Motion-blurred shadows produced in this way are strictly correct only when the receiving object is stationary with respect to the shadow camera. In particular, moving objects cast incorrect shadows onto other moving objects (and onto themselves). This happens because the deep shadow map effectively blurs an object's shadow over the entire shutter interval, allowing one object to cast shadows onto other objects at different times. However, even the ability to cast shadows onto stationary objects is useful, and the results are often acceptable even when the receiving object is moving.

6 Results

We have implemented deep shadow maps in a highly optimized scanline renderer that also supports traditional shadow maps. We present experiments comparing these two techniques in terms of time and storage space. We also illustrate two additional capabilities of deep shadows: volumetric shadows and inexpensive motion blur.

Figure 6a shows a ball covered with 50,000 hairs. The individual hairs are significantly narrower than a pixel, and combine to form tufts and curls of various densities. The scene is illuminated by three spotlights, each of which casts shadows.

We have rendered this scene under various conditions to compare the performance of deep shadow maps with traditional shadow maps. Figure 6b shows a magnified view of the shadow cast by the hairball, rendered using a 512×512 normal shadow map. Shadow filtering was done using 16 samples per lookup on a jittered grid; using more samples does not increase the shadow quality due to the coarse resolution of the underlying shadow map. This image has obvious noise artifacts that would be unacceptable if animated. In order to eliminate these artifacts, Figure 6c was rendered using a 4k×4k shadow map with 400 samples per lookup for percentage closer filtering. The noise artifacts are much improved, but shadow filtering times were much longer (559 seconds vs. 19 seconds). Figure 6d was rendered using a 512×512 deep shadow map with 256 samples per pixel (the same number of depth samples as the previous case). Even though the deep shadows were much faster (37 seconds) and required only one-sixth the storage space, the shadow quality is actually slightly better than the 4k×4k shadow map image (because deep shadows consider every depth sample rather than randomly selecting a subset of them for filtering).

Figure 7 summarizes the results of similar tests at various maximum noise levels. Each row compares a deep shadow map to a normal shadow map with the same number of depth samples; in the deep shadow this was achieved by holding the resolution constant at 256×256 and adjusting the number of samples per pixel.
(a) Ball with 50,000 hairs (b) 512×512 normal shadow map (c) 4k×4k normal shadow map (d) 512×512 deep shadow map

Figure 6: Hair ball and magnified shadows demonstrating noise from various methods.

Figure 7: Comparison of deep shadows and normal shadows for the scene in Figure 6. The error column shows the expected error 0.5/√N associated with the given sampling density.

Figure 8: Theoretical and observed growth rates of deep shadow maps. (The theoretical growth rate of O(N^{1/4}) is taken from Section 4.)

Figure 9: Filtering time as a function of filter width (expressed as a fraction of the shadow map size). While both methods access a constant number of pixels per lookup, larger filter widths result in worse cache coherence for normal shadow maps and slightly better cache coherence for deep shadow maps.

For each row of Figure 7, the resolution of the normal shadow map was chosen to achieve the same accuracy as the deep shadow map at its minimum filter size. The second column shows the corresponding worst-case expected error level of 0.5/√N; the deep shadows were compressed using an error tolerance of half this amount. All tests used the same absolute filter width, equal to the pixel size of a 256×256 shadow map. The shadow evaluation times were measured by rendering an image using each technique and subtracting the time for an image without shadows.

Observe that normal shadow maps become much more expensive as the error threshold decreases, while the deep shadow map filtering times are virtually constant. Normal shadow maps are only faster at 4×4 and fewer samples, sampling rates that are much too low for use in animations. In our implementation deep shadows have not been as extensively optimized as normal shadow maps, and it is likely that further speed improvements will be found. Shadow generation time was similar for both methods and is not shown; the compression overhead for deep shadows was negligible.

Notice that deep shadow maps grow very slowly in size as the error threshold decreases. The deep shadow sizes do not include mip-map overhead, which would make them approximately 1.5 times larger (see Section 5.3). Figure 8 plots the growth of the deep shadow map size with respect to the number of depth samples per pixel. About 40% of the pixels in this deep shadow map contain hair, while the rest contain simple geometry. Notice that the average number of vertices per compressed visibility function closely matches the O(N^{1/4}) behavior predicted in Section 4.

Figure 9 shows how both methods perform when a fixed accuracy is desired, but progressively larger filter widths are applied. In this case the number of filter samples for normal shadow maps can be held fixed. A single shadow map of each type was rendered to support the smallest desired filter size, and the same shadow maps were used to render progressively larger blurs. The filtering time for the normal shadow map grows rapidly with the amount of blur, even though the number of filter samples was held constant at 8×8. This can be attributed to worse cache performance as the filter region spans larger and larger portions of the shadow map. With the deep shadow map, on the other hand, mip-mapping allows the filtering times to be virtually constant. (In theory the cache performance of deep shadows is actually better at very large filter widths, since the lower-resolution mip-map levels contain fewer pixels.)

Figure 10 illustrates the importance of self-shadowing to the appearance of volumetric objects, while Figure 11 demonstrates that a single deep shadow map can be used for both volumetric and surface shadows. Finally, Figure 12 demonstrates the artifacts that occur when shadows are not motion blurred; this effect appears as strobing when animated. Unlike normal shadow maps, deep shadows allow motion blur to be added without incurring an extra filtering cost.
Figure 11: A cloud with pipes. Notice the shadows cast from surfaces onto volumetric objects and vice versa. A single deep shadow map contains the shadow information for the cloud as well as the pipes.

Figure 12: Rapidly moving sphere with and without motion blur.

7 Conclusion and Future Work

A nice feature of deep shadow maps is their generality: they support ordinary surfaces, volumetric objects, dense fur, and even motion blur, effects that would normally be handled using different techniques. With deep shadows they can all be combined in a single compact data structure, and rendered efficiently under a wide range of viewing and filtering conditions.

Carrying this idea further, deep shadow maps are an attractive representation for arbitrary volumetric data: fog densities, approximate illumination information, and so on. In this context, a deep shadow map can be viewed as a two-dimensional function image of piecewise linear one-dimensional functions. This representation is sparse in z, which allows it to take advantage of any smoothness in the raw data, while it is discrete in x and y (and allows binary search in z) in order to facilitate fast lookups. The domain can easily be mapped to a frustum or an orthographic space, and has the advantage of having an arbitrary extent and resolution in one of its dimensions. Three-dimensional filtering would be an easy extension, since the adjacent data values in z can be found without searching and the representation is sparse in this direction.

Although the function image representation may not achieve quite as much compression as a three-dimensional octree or wavelet expansion, and it is not quite as fast to index as a three-dimensional grid, it is an excellent compromise that retains most of the benefits of both of these extremes. We expect that deep shadow maps and their underlying representation will find other interesting applications in the future.

References

[1] Paul S. Heckbert. Survey of texture mapping. IEEE Computer Graphics and Applications, 6(11):56–67, November 1986.

[2] James T. Kajiya and Timothy L. Kay. Rendering fur with three dimensional textures. In Computer Graphics (SIGGRAPH '89 Proceedings), volume 23, pages 271–280, July 1989.

[3] James T. Kajiya and Brian P. Von Herzen. Ray tracing volume densities. In Computer Graphics (SIGGRAPH '84 Proceedings), volume 18, pages 165–174, July 1984.

[4] Brett Keating and Nelson Max. Shadow penumbras for complex objects by depth-dependent filtering of multi-layer depth images. In Eurographics Rendering Workshop 1999, pages 205–220, New York, June 1999. Springer-Verlag.

[5] Marc Levoy. Display of Surfaces from Volume Data. Ph.D. thesis, University of North Carolina at Chapel Hill, 1989.

[6] Nelson Max. Hierarchical rendering of trees from precomputed multi-layer Z-buffers. In Eurographics Rendering Workshop 1996, pages 165–174, New York, June 1996.

[7] Don P. Mitchell. Consequences of stratified sampling in graphics. In SIGGRAPH 96 Proceedings, pages 277–280. Addison Wesley, August 1996.

[8] Darwyn R. Peachey. Texture on demand. Unpublished manuscript, 1990.

[9] William T. Reeves, David H. Salesin, and Robert L. Cook. Rendering antialiased shadows with depth maps. In Computer Graphics (SIGGRAPH '87 Proceedings), volume 21, pages 283–291, July 1987.

[10] Jonathan W. Shade, Steven J. Gortler, Li-wei He, and Richard Szeliski. Layered depth images. In SIGGRAPH 98 Proceedings, pages 231–242. Addison Wesley, July 1998.

[11] Lance Williams. Casting curved shadows on curved surfaces. Computer Graphics (SIGGRAPH '78 Proceedings), 12(3):270–274, August 1978.

[12] Lance Williams. Pyramidal parametrics. Computer Graphics (SIGGRAPH '83 Proceedings), 17(3):1–11, July 1983.

[13] Andrew Woo, Pierre Poulin, and Alain Fournier. A survey of shadow algorithms. IEEE Computer Graphics and Applications, 10(6):13–32, November 1990.