Lopez-Moreno CEIG09
9-11 (2009)
C. Andújar and J. LLuch (Editors)
Abstract
Common tasks related to image processing or augmented reality include rendering new objects into existing images, or matching objects with unknown illumination. To facilitate such algorithms, it is often necessary to infer from which directions a scene was illuminated, even if only a photograph is available. For this purpose, we present a novel light source detection algorithm that, contrary to the current state of the art, is able to detect multiple light sources with sufficient accuracy. No 3D measurements are required: only the input image and a very small amount of unskilled user interaction.
Categories and Subject Descriptors (according to ACM CCS): Computing Methodologies [I.3.7]: Computer
Graphics—3D Graphics; Computing Methodologies [I.4.10]: Image Processing and Computer Vision—Image
Representation
© The Eurographics Association 2009.
Jorge Lopez-Moreno & Sunil Hadap & Erik Reinhard & Diego Gutierrez / Light source detection in photographs
is aligned with the image plane and the axis is aligned with the x axis. Thus the direction of a light is uniquely identified by the azimuth angle φ and the zenith angle θ.

3.2. Estimating Azimuth Angles

We assume that the normals at the silhouette lie in the image plane. We further assume that there are N discrete lights, each being either a directional light or a far away point light (we estimate N below). Thus each light is uniquely characterized by its unknown luminance L_j and unknown unit direction ω_j, j = 1···N. To analyze the intensity variation of the silhouette pixels, we assume a nominal Lambertian surface. Consider all pixels {p_i} that belong to the silhouette. Let n_i be the normal and L_i^v be the known luminance of the object at point p_i:

    L_i^v = ∑_{j=1}^{N} Ω_ij L_j                                  (1)

    Ω_ij = Ω(n_i, ω_j) = { 0                  if n_i · ω_j < 0
                         { K_i^d n_i · ω_j    if n_i · ω_j ≥ 0

Algorithm 1 Contour Voting - N lights
Require: L^v ≡ {L_i^v}  {discrete luminances}
Require: n ≡ {n_i}  {silhouette normals}
Require: φ^n ≡ {φ_i^n}  {azimuth coordinates of the normals}
 1: sort(L^v, n, φ^n)  {sort by decreasing luminance}
 2: φ^l ≡ {φ_j^l} | j ∈ [1···N]  {azimuth coordinates of the lights}
 3: seed(φ^l)
 4: α^⊕ ≡ {α_j^⊕} | j ∈ [1···N]  {aggregate of weights per light}
 5: α^⊕ ← 0
 6: repeat
 7:   for all L_i^v ∈ L^v do
 8:     ω_j ← [0, sin(φ_j^l), cos(φ_j^l)]^T  {current direction}
 9:     Ω_i^⊕ ← ∑_j Ω(n_i, ω_j)  {total occlusion weight}
10:     for all j ∈ [1···N] do
11:       ω_j ← [0, sin(φ_j^l), cos(φ_j^l)]^T  {current direction}
12:       α_ij ← L_i^v Ω(n_i, ω_j) / Ω_i^⊕  {weight of normal i}
13:       φ_j^l ← α_j^⊕ φ_j^l + α_ij φ_i^n  {update direction}
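Algorithm 1 can be sketched in a few lines of code. The listing below is one plausible reading of it, with two assumptions the text leaves open: seed() places each light at one of the brightest silhouette normals, and the update on line 13 is interpreted as a running weighted mean of contributing normal azimuths (all function and variable names are ours):

```python
import numpy as np

def contour_voting(Lv, phi_n, n_lights, n_iters=10):
    """Estimate the azimuth of each of N lights from silhouette
    luminances Lv and silhouette-normal azimuths phi_n (sketch of
    the contour-voting algorithm; interpretation assumptions above)."""
    Lv = np.asarray(Lv, float)
    phi_n = np.asarray(phi_n, float)
    order = np.argsort(-Lv)                  # sort by decreasing luminance
    Lv, phi_n = Lv[order], phi_n[order]
    phi_l = phi_n[:n_lights].copy()          # seed: brightest silhouette normals
    alpha = np.zeros(n_lights)               # aggregate weight per light
    for _ in range(n_iters):
        for L_i, p_i in zip(Lv, phi_n):
            if L_i <= 0.0:
                continue
            # Lambertian occlusion weight: Ω(n_i, ω_j) = max(0, n_i · ω_j),
            # which for in-plane normals reduces to max(0, cos(φ_j^l − φ_i^n))
            occ = np.maximum(0.0, np.cos(phi_l - p_i))
            total = occ.sum()
            if total == 0.0:
                continue
            for j in range(n_lights):
                a_ij = L_i * occ[j] / total  # weight of normal i for light j
                if alpha[j] + a_ij == 0.0:
                    continue
                # running weighted mean (naive: ignores 2π wrap-around)
                phi_l[j] = (alpha[j] * phi_l[j] + a_ij * p_i) / (alpha[j] + a_ij)
                alpha[j] += a_ij
    return phi_l
```

Note that a naive arithmetic mean of azimuths ignores the 2π wrap-around; a more careful implementation would average unit direction vectors instead.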
we scan the light direction from multiple silhouette points. One way to realize this scheme is to rotate the image such that the light direction ω(φ_j^l) is aligned with the y-axis and the light on the left, see Figure 3. Then we simply scan each raster line i, starting from the silhouette boundary on the left into the interior. We detect the set of points {p_ij^hi} or {p_ij^lo}, corresponding zenith angles {θ_ij} and luminances L_ij^v. Thus for the light j, the resultant zenith angle is the weighted sum:

    θ_j = ∑_i L_ij^v θ_ij / ∑_i L_ij^v                            (2)

3.4. Ambient Illumination

The shading contribution of the ambient light is assumed to be constant for all pixels, and we can therefore estimate its intensity by analyzing pixels in the shadow regions. We have already detected the shadow lines in the previous step. The region bounded by these shadow lines is determined to be a shadow region. We average the set of samples along these boundaries. This ambient intensity estimate is also relative to the previously detected lights.

We have tested our algorithm on several images with controlled (known) light configurations, in order to measure the errors of our light detection. The images include varied configurations (see Figure 4): Apple1, Apple2 and Apple3 show a relatively simple geometry under very different lighting schemes (with one or two light sources, plus ambient light). The Guitar and Quilt images show much more complex scenes, lit by three and two light sources respectively. The light directions returned by our algorithm show errors usually below 20 degrees for the more restrictive azimuth angle φ. This error range is discussed in the next section. Even for the zenith angle θ, only the second light in the Quilt scene returned a larger error, due to the bouncing of that light off the surface on the left. Table 1 shows all the data for the input images shown in Figure 4: for each light source present in the scene, we show the real measured locations of the light sources, the results output by our algorithm and the corresponding absolute error. The light probe used in the first three images is the apple; for the other two, we used the head of the guitar player and the Scottish quilt.

            Light 1            Light 2            Light 3
            φ        θ         φ        θ         φ        θ
Apple1  R
        A    5.71    35.31    162.25   −64.03      −        −
        E   20.71     4.69      2.75    24.03      −        −
Apple2  R   90.00   −70.00      −        −         −        −
        A   94.54   −65.70      −        −         −        −
        E    4.54     4.30      −        −         −        −
Apple3  R  180.00     0.00      0.00     0.00      −        −
        A  168.50    14.48      0.00    11.31      −        −
        E   12.50    14.48      0.00    11.31      −        −
Guitar  R  180.00    10.00     30.00   −45.00    260.00    45.00
        A  185.71    29.66     25.64   −49.19    272.29    41.48
        E    5.71    19.66      4.36     4.19     12.29     3.16
Quilt   R   10.00   −35.00    120.00   −10.00      −        −

Table 1: Real measured light directions (R), values returned by our algorithm (A) and absolute error (E) for the azimuth φ and zenith θ angles in the scenes depicted in Figure 4.

We have further tested our algorithm on uncontrolled images, depicting scenes with unknown illumination and varying degrees of diffuse-to-directional lighting ratios. Given that we obviously cannot provide error measures in those cases,
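The per-light zenith estimate of Equation (2) is a plain luminance-weighted average of the scanned zenith angles; in code (function name is ours):

```python
import numpy as np

def zenith_from_scanlines(theta_ij, L_ij):
    """Luminance-weighted zenith estimate for one light (Eq. 2):
    each scanned silhouette point votes for its zenith angle theta_ij
    with a weight equal to its luminance L_ij."""
    theta_ij = np.asarray(theta_ij, float)
    L_ij = np.asarray(L_ij, float)
    return float((L_ij * theta_ij).sum() / L_ij.sum())
```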
cations are gaining popularity due to, among other factors, the current existence of huge databases and their accessibility through the internet. Some examples include Photo Clip Art [LHE*07], Interactive Digital Photomontage [ADA*04] or Photo Tourism [SSS06].

We assume global convexity for the chosen de-facto light probes in the images. Whilst this is true for most objects, the algorithm will return wrong values if a concave object is chosen instead. Our algorithm will also fail in the presence of purely reflective or transparent (refractive) objects chosen as light probes, which break our assumptions about shading. In these cases, an approach similar to [NN04] may be more suitable, although previous knowledge about the geometry of the objects in the image would be needed. As future work, we would like to address these cases.

6. Acknowledgements

This research was partially funded by a generous gift from Adobe Systems Inc and the Spanish Ministry of Science and Technology (TIN2007-63025).

References

[ADA*04] Agarwala A., Dontcheva M., Agrawala M., Drucker S., Colburn A., Curless B., Salesin D., Cohen M.: Interactive digital photomontage. ACM Transactions on Graphics 23, 3 (2004), 294–302.

[BH85] Brooks M., Horn B.: Shape and source from shading. In Proc. Int. Joint Conf. Artificial Intell. (1985), pp. 932–936.

[GHH01] Gibson S., Howard T., Hubbold R.: Flexible image-based photometric reconstruction using virtual light sources. Computer Graphics Forum 20, 3 (2001), C203–C214.

[HA93] Hougen D., Ahuja N.: Estimation of the light source distribution and its use in integrated shape recovery from stereo shading. In ICCV (1993), pp. 29–34.

[Hor86] Horn B.: Robot Vision. McGraw-Hill, 1986.

[KP03] Koenderink J. J., Pont S. C.: Irradiation direction from texture. Journal of the Optical Society of America A 20, 10 (2003), 1875–1882.

[KRFB06] Khan E. A., Reinhard E., Fleming R., Bülthoff H.: Image-based material editing. ACM Transactions on Graphics (SIGGRAPH 2006) 25, 3 (2006), 654–663.

[KvDP04] Koenderink J. J., van Doorn A. J., Pont S. C.: Light direction from shad(ow)ed random Gaussian surfaces. Perception 33, 12 (2004), 1405–1420.

[LB01] Langer M. S., Bülthoff H. H.: A prior for global convexity in local shape-from-shading. Perception 30 (2001), 403–410.

[LF06] Lagger P., Fua P.: Using specularities to recover multiple light sources in the presence of texture. In ICPR '06: Proceedings of the 18th International Conference on Pattern Recognition (Washington, DC, USA, 2006), IEEE Computer Society, pp. 587–590.

[LHE*07] Lalonde J.-F., Hoiem D., Efros A. A., Rother C., Winn J., Criminisi A.: Photo clip art. ACM Transactions on Graphics (SIGGRAPH 2007) 26, 3 (August 2007). See the project webpage at http://graphics.cs.cmu.edu/projects/photoclipart.

[LR89] Lee C., Rosenfeld A.: Improved methods of estimating shape from shading using the light source coordinate system. In Shape from Shading, Horn B., Brooks M., (Eds.). MIT Press, 1989, pp. 323–569.

[MG97] Marschner S. R., Greenberg D. P.: Inverse lighting for photography. In Fifth IS&T/SID Color Imaging Conference (1997), pp. 262–265.

[NE01] Nillius P., Eklundh J.-O.: Automatic estimation of the projected light source direction. In CVPR (2001), pp. I:1076–1083.

[NN04] Nishino K., Nayar S. K.: Eyes for relighting. ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH) 23, 3 (2004), 704–711.

[OCDD01] Oh B. M., Chen M., Dorsey J., Durand F.: Image-based modeling and photo editing. In SIGGRAPH '01: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (2001), pp. 433–442.

[OCS05] Ostrovsky Y., Cavanagh P., Sinha P.: Perceiving illumination inconsistencies in scenes. Perception 34 (2005), 1301–1314.

[Pen82] Pentland A.: Finding the illuminant direction. Journal of the Optical Society of America 72, 4 (1982), 448–455.

[PSG01] Powell M., Sarkar S., Goldgof D.: A simple strategy for calibrating the geometry of light sources. IEEE Transactions on Pattern Analysis and Machine Intelligence 23, 9 (2001), 1022–1027.

[SSI99] Sato I., Sato Y., Ikeuchi K.: Illumination distribution from brightness in shadows: Adaptive estimation of illumination distribution with unknown reflectance properties in shadow regions. In ICCV (2) (1999), pp. 875–882.

[SSS06] Snavely N., Seitz S. M., Szeliski R.: Photo tourism: Exploring photo collections in 3D. ACM Transactions on Graphics 25, 3 (2006), 835–846.

[VY94] Vega E., Yang Y.-H.: Default shape theory: With application to the computation of the direction of the light source. CVGIP: Image Understanding 60 (1994), 285–299.

[VZ04] Varma M., Zisserman A.: Estimating illumination direction from textured images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC (June 2004), vol. 1, pp. 179–186.

[WAC07] Wang J., Agrawala M., Cohen M.: Soft scissors: An interactive tool for realtime high quality matting. ACM Transactions on Graphics 26, 3 (2007), 9.

[WS02] Wang Y., Samaras D.: Estimation of multiple illuminants from a single image of arbitrary known geometry. In ECCV02 (2002), vol. 3, pp. 272–288.

[YY91] Yang Y., Yuille A.: Sources from shading. In Computer Vision and Pattern Recognition (1991), pp. 534–539.

[ZK02] Zhou W., Kambhamettu C.: Estimation of illuminant direction and intensity of multiple light sources. In ECCV '02: Proceedings of the 7th European Conference on Computer Vision, Part IV (London, UK, 2002), Springer-Verlag, pp. 206–220.

[ZTCS99] Zhang R., Tsai P., Cryer J., Shah M.: Shape from shading: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 21, 8 (1999), 690–706.

[ZY01] Zhang Y., Yang Y.-H.: Multiple illuminant direction detection with application to image synthesis. IEEE Transactions on Pattern Analysis and Machine Intelligence 23, 8 (2001), 915–920.