Introduction to Photogrammetry
T. Schenk
schenk.2@osu.edu
Autumn Quarter 2005
Contents

1 Introduction
  1.1 Preliminary Remarks
  1.2 Definitions, Processes and Products
    1.2.1 Data Acquisition
    1.2.2 Photogrammetric Products
      Photographic Products
      Computational Results
      Maps
    1.2.3 Photogrammetric Procedures and Instruments
  1.3 Historical Background

2 Film-based Cameras
  2.1 Photogrammetric Cameras
    2.1.1 Introduction
    2.1.2 Components of Aerial Cameras
      Lens Assembly
      Inner Cone and Focal Plane
      Outer Cone and Drive Mechanism
      Magazine
    2.1.3 Image Motion
    2.1.4 Camera Calibration
    2.1.5 Summary of Interior Orientation
  2.2 Photographic Processes
    2.2.1 Photographic Material
    2.2.2 Photographic Processes
      Exposure
      Sensitivity
      Colors and Filters
      Processing Color Film
    2.2.3 Sensitometry
    2.2.4 Speed
    2.2.5 Resolving Power

3 Digital Cameras
  3.1 Overview
    3.1.1 Camera Overview
    3.1.2 Multiple frame cameras
    3.1.3 Line cameras
    3.1.4 Camera Electronics
    3.1.5 Signal Transmission
    3.1.6 Frame Grabbers
  3.2 CCD Sensors: Working Principle and Properties
    3.2.1 Working Principle
    3.2.2 Charge Transfer
      Linear Array With Bilinear Readout
      Frame Transfer
      Interline Transfer
    3.2.3 Spectral Response

4 Properties of Aerial Photography

6 Measuring Systems
  6.1 Analytical Plotters
    6.1.1 Background
    6.1.2 System Overview
      Stereo Viewer
      Translation System
      Measuring and Recording System
      User Interface
      Electronics and Real-Time Processor
      Host Computer
      Auxiliary Devices
    6.1.3 Basic Functionality
      Model Mode
      Comparator Mode
    6.1.4 Typical Workflow
      Definition of System Parameters
      Definition of Auxiliary Data
      Definition of Project Parameters
      Interior Orientation
      Relative Orientation
      Absolute Orientation
    6.1.5 Advantages of Analytical Plotters
  6.2 Digital Photogrammetric Workstations
    6.2.1 Background
      Digital Photogrammetric Workstation and Digital Photogrammetry Environment
    6.2.2 Basic System Components
    6.2.3 Basic System Functionality
      Storage System
      Viewing and Measuring System
      Stereoscopic Viewing
      Roaming
  6.3 Analytical Plotters vs. DPWs
Chapter 1
Introduction
1.1 Preliminary Remarks
This course provides a general overview of photogrammetry, its theory and general working principles, with an emphasis on concepts rather than detailed operational knowledge.
Photogrammetry is an engineering discipline and as such is heavily influenced by developments in computer science and electronics. The ever increasing use of computers has had, and will continue to have, a great impact on photogrammetry. The discipline is, like many others, in a constant state of change. This becomes especially evident in the shift from analog to analytical and digital methods.
There has always been what we may call a technological gap: first, between the latest findings in research and the implementation of these results in manufactured products; and second, between the manufactured product and its general use in an industrial process. In that sense, photogrammetric practice is an industrial process.
A number of organizations are involved in this process. Inventions are likely to be associated with research organizations, such as universities, research institutes and the research departments of industry. The development of a product based on such research results is a second phase, carried out, for example, by companies manufacturing photogrammetric equipment. Between research and development there are many similarities, the major difference being that the results of research activities are not known beforehand; development goals, on the other hand, are accurately defined in terms of product specifications, time and cost.
The third partner in the chain is the photogrammetrist, who uses the instruments and methods daily and gives valuable feedback to researchers and developers. Fig. 1.1 illustrates the relationship among the different organizations and the time elapsed from the moment of an invention until it becomes operational and available to photogrammetric practice.
Figure 1.1: Time gap between research, development and operational use of a new method or instrument.

Analytical plotters may serve as an example of the time gap discussed above. Invented in the late fifties, they were only manufactured in quantities nearly twenty years later; they have been in widespread use since the early eighties. Another example
is aerial triangulation. The mathematical foundation was laid in the fifties and the first programs became available in the late sixties, but it took another decade before they were widely used in photogrammetric practice.
There are only a few manufacturers of photogrammetric equipment. The two leading
companies are Leica (a recent merger of the former Swiss companies Wild and Kern),
and Carl Zeiss of Germany (before unification there were two separate companies: Zeiss
Oberkochen and Zeiss Jena).
Photogrammetry and remote sensing are two related fields. This is also manifest
in national and international organizations. The International Society of Photogrammetry and Remote Sensing (ISPRS) is a non-governmental organization devoted to
the advancement of photogrammetry and remote sensing and their applications. It was
founded in 1910. Members are national societies representing professionals and specialists of photogrammetry and remote sensing of a country. Such a national organization
is the American Society for Photogrammetry and Remote Sensing (ASPRS).
The principal difference between photogrammetry and remote sensing lies in the application: while photogrammetrists produce maps and precise three-dimensional positions of points, remote sensing specialists analyze and interpret images to derive information about the earth's land and water areas. As depicted in Fig. 1.2, both disciplines are also related to Geographic Information Systems (GIS) in that they provide GIS with essential information. Quite often, the core of topographic information is produced by photogrammetrists in the form of a digital map.
ISPRS adopted the metric system and we will be using it in this course. Where appropriate, we will occasionally use English units, particularly in regard to the focal lengths of cameras. Despite considerable effort there is, unfortunately, no unified nomenclature. We follow as closely as possible the terms and definitions laid out in [1]. Students who are interested in a more thorough treatment of photogrammetry are referred to [2], [3], [4], [5]. Finally, some of the leading journals should be mentioned. The official journal published by ISPRS is called Photogrammetry and Remote Sensing. ASPRS's journal, Photogrammetric Engineering and Remote Sensing (PERS), appears monthly, while Photogrammetric Record, published by the British Society of Photogrammetry and Remote Sensing, appears six times a year. Another renowned journal is the Zeitschrift für Photogrammetrie und Fernerkundung, published in Germany.
Figure 1.2: Relationship between photogrammetry, remote sensing and GIS: both photogrammetry and remote sensing acquire information about the object space and feed it, via data fusion, into GIS.
1.2 Definitions, Processes and Products

Figure 1.3: Overview of photogrammetry: data acquisition, photogrammetric procedures and instruments (rectifier, orthophoto projector, comparator, stereoplotter, analytical plotter, scanner, softcopy workstation), and photogrammetric products (photographic products such as enlargements/reductions, rectifications and orthophotos; computed points; and maps, both topographic and special maps).
1.2.1 Data Acquisition

Sensors for data acquisition are mounted on a variety of platforms, with airplanes being the most common. Table 1.1 summarizes the different objects and platforms and associates them with the different areas of specialization of photogrammetry.

Table 1.1: Different areas of specialization of photogrammetry, their objects and sensor platforms.

object                 sensor platform           specialization
planet                 space vehicle             space photogrammetry
earth's surface        airplane, space vehicle   aerial photogrammetry
industrial part        tripod                    industrial photogrammetry
historical building    tripod                    architectural photogrammetry
human body             tripod                    biostereometrics
1.2.2 Photogrammetric Products
The photogrammetric products fall into three categories: photographic products, computational results, and maps.
Photographic Products
Photographic products are derivatives of single photographs or composites of overlapping photographs. Fig. 1.4 depicts the typical case of photographs taken by an aerial
camera. During the time of exposure, a latent image is formed, which is developed into a negative. At the same time, diapositives and paper prints are produced. Enlargements may be quite useful for preliminary design or planning studies. A better approximation to a map is a rectification. A plane rectification involves just tipping and tilting the diapositive so that it is parallel to the ground. If the ground has relief, the rectified photograph still has errors. Only a differentially rectified photograph, better known as an orthophoto, is geometrically identical with a map.
Composites are frequently used as a first base for general planning studies. Photomosaics are best known, but composites with orthophotos, called orthophoto maps, are also used, especially now that they can be generated with the methods of digital photogrammetry.
Computational Results
Aerial triangulation is a very successful application of photogrammetry. It delivers 3-D
positions of points, measured on photographs, in a ground control coordinate system,
e.g., state plane coordinate system.
Profiles and cross sections are typical products for highway design where earthwork
quantities are computed. Inventory calculations of coal piles or mineral deposits are
other examples that may require profile and cross-section data. The most popular form for representing portions of the earth's surface is the DEM (Digital Elevation Model), in which elevations are measured at regularly spaced grid points.

Figure 1.4: Photographic products derived from an aerial photograph: the negative (at focal distance f behind the perspective center), and the reduction, diapositive, enlargement and rectification derived from it with respect to the ground.
Maps
Maps are the most prominent product of photogrammetry. They are produced at various scales and degrees of accuracy. Planimetric maps contain only the horizontal positions of ground features, while topographic maps include elevation data, usually in the form of contour lines and spot elevations. Thematic maps emphasize one particular feature, e.g., the transportation network.
1.2.3 Photogrammetric Procedures and Instruments

The table below contrasts photographs and maps and indicates the tasks that lead from one to the other:

              photograph   map
projection    central      orthogonal
data          0.5 GB       few KB
information   implicit     explicit

tasks: transformations, data reduction, feature identification and feature extraction

On the order of 0.5 GB of data is needed to store a digitized aerial photograph. A map depicting the same scene will only have a few thousand bytes of data. Consequently, another important task is data reduction.
The information we want to represent on a map is explicit. By that we mean that all data are labeled: a point or a line has an associated attribute which says something about the type and meaning of the point or line. This is not the case for an image; a pixel has no attribute associated with it which would tell us what feature it belongs to. Thus, the relevant information is only implicitly available. Making information explicit amounts to identifying and extracting those features which must be represented on the map.
Finally, we refer back to Fig. 1.3 and point out the various instruments that are used to perform the tasks described above. A rectifier is a kind of copy machine for making plane rectifications. In order to generate orthophotos, an orthophoto projector is required. A comparator is a precise measuring instrument which lets you measure points on a diapositive (photo coordinates); it is mainly used in aerial triangulation. In order to measure 3-D positions of points in a stereo model, a stereo plotting instrument, or stereo plotter for short, is used. It performs the transformation from central to orthogonal projection in an analog fashion. This is the reason why these instruments are sometimes, less formally, called analog plotters. An analytical plotter establishes the transformation computationally. Both types of plotters are mainly used to produce maps, DEMs and profiles.
A recent addition to photogrammetric instruments is the softcopy workstation. It is
the first tangible product of digital photogrammetry. Consequently, it deals with digital
imagery rather than photographs.
1.3 Historical Background
Figure: Major phases of photogrammetry, ca. 1850-2000. The invention of photography (around 1850) marks the first generation; the inventions of the airplane (around 1900) and of the computer (around 1950) delimit the analog, analytical and digital phases of photogrammetry.
References
[1] Multilingual Dictionary of Remote Sensing and Photogrammetry, ASPRS, 1983, 343 p.
[2] Manual of Photogrammetry, 4th Ed., ASPRS, 1980, 1056 p.
[3] Moffitt, F.H. and E. Mikhail, 1980. Photogrammetry, 3rd Ed., Harper & Row Publishers, NY.
[4] Wolf, P., 1980. Elements of Photogrammetry, McGraw-Hill Book Co., NY.
[5] Kraus, K., 1994. Photogrammetry, Ferd. Dümmler Verlag, Bonn.
Chapter 2
Film-based Cameras
2.1 Photogrammetric Cameras

2.1.1 Introduction
At the beginning of this chapter we introduced the term sensing device as a generic name for devices that sense and record radiometric energy. Fig. 2.1 shows a classification of the different types of sensing devices.
An example of an active sensing device is radar. An operational system sometimes used for photogrammetric applications is the side-looking airborne radar (SLAR). Its chief advantage is the fact that radar waves penetrate clouds and haze. An antenna attached to the belly of an aircraft directs microwave energy to the side, at right angles to the direction of flight. The energy incident on the ground is scattered and partially reflected, and a portion of the reflected energy is received at the same antenna. The time elapsed between transmission and reception can be used to determine the distance between antenna and ground.
Passive systems fall into two categories: image forming systems and spectral data systems. We are mainly interested in image forming systems, which are further subdivided into framing systems and scanning systems. In a framing system, data are acquired in one instant, whereas a scanning system obtains the same information sequentially, for example scanline by scanline. Image forming systems record radiant energy at different portions of the spectrum. The spatial position of recorded radiation refers to a specific location on the ground; the imaging process establishes a geometric and radiometric relationship between the spatial positions of object and image space.
Of all the sensing devices used to record data for photogrammetric applications,
the photographic systems with metric properties are the most frequently employed.
They are grouped into aerial cameras and terrestrial cameras. Aerial cameras are also
called cartographic cameras. In this section we are only concerned with aerial cameras.
Panoramic cameras are examples of non-metric aerial cameras. Fig. 2.2(a) depicts an
aerial camera.
Figure 2.1: Classification of sensing devices. Active systems (e.g. radar) are distinguished from passive systems; passive image forming systems comprise framing systems (photographic systems with aerial and terrestrial cameras, and electron imagers) and scanning systems (e.g. multispectral scanners).
2.1.2 Components of Aerial Cameras
A typical aerial camera consists of lens assembly, inner cone, focal plane, outer cone,
drive mechanism, and magazine. These principal parts are shown in the schematic
diagram of Fig. 2.2(b).
Lens Assembly
The lens assembly, also called lens cone, consists of the camera lens (objective), the
diaphragm, the shutter and the filter. The diaphragm and the shutter control the exposure.
The camera is focused for infinity; that is, the image is formed in the focal plane.
Fig. 2.3 shows cross sections of lens cones with different focal lengths. Superwide-angle lens cones have a focal length of 88 mm (3.5 in). The other extreme are
narrow-angle cones with a focal length of 610 mm (24 in). Between these two extremes
are wide-angle, intermediate-angle, and normal-angle lens cones, with focal lengths of
153 mm (6 in), 213 mm (8.25 in), and 303 mm (12 in), respectively. Since the film format does not change, the angle of coverage (or field, for short) changes with focal length, and so does the scale. The most relevant data are compiled in Table 2.1. Refer also to Fig. 2.4, which illustrates the different configurations.

Figure 2.2: (a) Aerial camera Aviophot RC20 from Leica; (b) schematic diagram of an aerial camera.

Figure 2.4: Angular coverage, photo scale and ground coverage of cameras with different focal lengths.

Table 2.1: Data of different lens assemblies.

lens cone      focal length [mm]   angular coverage [deg]   rel. scale number   rel. ground area
superwide      88                  119                      7.2                 50.4
wide-angle     153                 82                       4.0                 15.5
intermediate   210                 64                       2.9                 8.3
normal-angle   305                 46                       2.0                 3.9
narrow-angle   610                 24                       1.0                 1.0
Super-wide-angle lens cones are suitable for medium- to small-scale applications because the flying height H is much lower compared to a normal-angle cone (assuming the same photo scale). Thus, atmospheric effects such as clouds and haze are much less of a problem. Normal-angle cones are preferred for large-scale applications in urban areas, where a super-wide-angle cone would generate many more occluded areas, particularly in built-up areas with tall buildings.
Inner Cone and Focal Plane
For metric cameras it is very important to keep the lens assembly fixed with respect to
the focal plane. This is accomplished by the inner cone. It consists of a metal with
low coefficient of thermal expansion so that the lens and the focal plane do not change
their relative position. The focal plane contains fiducial marks, which define the fiducial
coordinate system that serves as a reference system for metric photographs. The fiducial
marks are either located at the corners or in the middle of the four sides.
Usually, additional information is printed on one of the marginal strips during the time of exposure.
2.1.3 Image Motion

During the instant of exposure, the aircraft moves and with it the camera, including the image plane. Thus, a stationary object is imaged at different image locations, and the image appears to move. Image motion results not only from the forward movement of the aircraft but also from vibrations. Fig. 2.5 depicts the situation for forward motion.
An airplane flying with velocity v advances by a distance D = v Δt during the exposure time Δt. Since the object on the ground is stationary, its image moves by a distance d:

    d = (v Δt f) / H = (v Δt) / m    (2.1)

where f is the focal length, H the flying height and m the photo scale number. With an exposure time Δt = 1/300 sec, a velocity v = 300 km/h, a focal length f = 150 mm and a flying height H = 1500 m, the image motion amounts to d = 28 µm.
Image motion caused by vibrations in the airplane can also be computed using Eq. 2.1. For that case, vibrations are expressed as a time rate of change of the camera axis (angle/sec). Suppose the camera axis vibrates by 2°/sec. This corresponds to a ground distance Dv = 2° · H / ρ° = 52.3 m, with ρ° ≈ 57.3°/rad. Since this "displacement" occurs in one second, it can be considered a velocity. In our example, this velocity is 188.4 km/h, corresponding to an image motion of 18 µm. Note that in this case, the direction of image motion is random.
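Both numbers are easy to verify with Eq. 2.1; a minimal Python check using the values from the text:

    import math

    def image_motion(v, dt, f, H):
        """Image motion [m] in the focal plane, Eq. 2.1: d = v*dt*f/H."""
        return v * dt * f / H

    dt, f, H = 1 / 300, 0.150, 1500.0   # exposure time [s], focal length [m], flying height [m]
    v_forward = 300 / 3.6               # 300 km/h in m/s
    print(image_motion(v_forward, dt, f, H) * 1e6)     # ~28 (micrometers)

    v_vibration = H * math.radians(2.0) # 2 deg/s axis rotation -> ~52.3 m/s at ground
    print(image_motion(v_vibration, dt, f, H) * 1e6)   # ~17-18 (micrometers)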
As the example demonstrates, image motion may considerably decrease the image
quality. For this reason, modern aerial cameras try to eliminate image motion. There
are different mechanical/optical solutions, known as image motion compensation. The
forward image motion can be reduced by moving the film during exposure such that the
image of an object does not move with respect to the emulsion. Since the direction of
image motion caused by vibration is random, it cannot be compensated by moving the
film. The only measure is a shock absorbing camera mount.
2.1.4 Camera Calibration
During the process of camera calibration, the interior orientation of the camera is
determined. The interior orientation data describe the metric characteristics of the
camera needed for photogrammetric processes. The elements of interior orientation
are:
1. The position of the perspective center with respect to the fiducial marks.
2. The coordinates of the fiducial marks or distances between them so that coordinates can be determined.
3. The calibrated focal length of the camera.
4. The radial and decentering distortion of the lens assembly, including the origin
of radial distortion with respect to the fiducial system.
5. Image quality measures such as resolution.
There are several ways to calibrate a camera. After assembling the camera, the manufacturer performs the calibration under laboratory conditions. Cameras should be recalibrated from time to time, because stress, caused by the temperature and pressure differences an airborne camera experiences, may change some of the elements of interior orientation. Laboratory calibrations are also performed by specialized government agencies.
Figure 2.6: Two views of a goniometer with installed camera, ready for calibration.
Now the measurement part of the calibration procedure begins. The telescope is aimed at the grid intersections of the grid plate, viewing through the camera. The angles subtended at the rear nodal point between the camera axis and the grid intersections are obtained by subtracting the zero position (the reading to the collimator before the camera is installed) from the circle readings. This is repeated for all grid intersections along the four semi-diagonals.
Having determined the angles αi permits computing the distances di from the center of the grid plate (PPA) to the corresponding grid intersections i by Eq. 2.2:

    di = f tan(αi)     (2.2)
    dri = dgi - di     (2.3)
The computed distances di are compared with the known distances dgi of the grid
plate. The differences dri result from the radial distortion of the lens assembly. Radial
distortion arises from a change of lateral magnification as a function of the distance
from the center.
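The evaluation of Eqs. 2.2 and 2.3 can be sketched in a few lines of Python; the focal length, angles and grid distances below are made-up illustration values, not calibration data from the text:

    import math

    f = 152.4                                  # nominal focal length [mm], assumed
    angles_deg = [7.47, 14.70, 21.50, 27.70]   # measured angles to grid intersections
    d_grid = [20.0, 40.0, 60.0, 80.0]          # known grid-plate distances [mm]

    for alpha, dg in zip(angles_deg, d_grid):
        d = f * math.tan(math.radians(alpha))  # Eq. 2.2: computed distance
        dr = dg - d                            # Eq. 2.3: radial distortion
        print(f"d = {d:6.2f} mm   dr = {dr:+.3f} mm")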
The differences dri are plotted against the distances di . Fig. 2.7(a) shows the result.
The curves for the four semi diagonals are quite different and it is desirable to make
them as symmetrical as possible to avoid working with four sets of distortion values.
This is accomplished by changing the origin from the PPA to a different point, called
the principal point of symmetry (PPS). The effect of this change of the origin is shown
in Fig. 2.7(b). The four curves are now similar enough and the average curve represents
the direction-independent distortion. The distortion values for this average curve are
denoted by dri .
Figure 2.7: Radial distortion curves for the four semi-diagonals (a). In (b) the curves
are made symmetrical by shifting the origin to PPS. The final radial distortion curve in
(c) is obtained by changing the focal length from f to c.
The average curve is not yet well balanced with respect to the horizontal axis. The next step involves a rotation of the distortion curve such that |dr_min| = |dr_max|. A change of the focal length will rotate the average curve. The focal length with this desirable property is called the calibrated focal length c. Throughout the remainder of the text we will be using c instead of f; that is, we use the calibrated focal length and not the optical focal length.
After completion of all measurements, the grid plate is replaced by a photosensitive plate. The telescope is rotated to the zero position and the reticule is projected through the lens onto the plate, where it marks the PPA. At the same time, the fiducial marks are exposed. The processed plate is measured and the position of the PPA is determined with respect to the fiducial marks.
2.1.5 Summary of Interior Orientation

Figure 2.8: Illustration of interior orientation. EP and AP are entrance and exit pupils; they intersect the optical axis at the perspective centers O and O'. The mathematical perspective center Om is determined such that the angles at O and Om become as similar as possible. Point Ha, also known as the principal point of autocollimation (PPA), is the vertical drop of Om to the image plane B. The distance from Om to Ha is the calibrated focal length c.
1. The position of the perspective center is given by the PPA and the calibrated focal
length c. The bundle rays through projection center and image points resemble
most closely the bundle in object space, defined by the front nodal point and
points on the ground.
2. The radial distortion curve contains the information necessary for correcting image points that are displaced by the lens due to differences in lateral magnification.
The origin of the symmetrical distortion curve is at the principal point of symmetry
PPS. The distortion curve is closely related to the calibrated focal length.
3. The position of the PPA and PPS is fixed with reference to the fiducial system.
The intersection of opposite fiducial marks indicates the fiducial center FC. The
three centers lie within a few microns. The fiducial marks are determined by
distances measured along the side and diagonally.
Modern aerial cameras are virtually distortion free. A good approximation for the
interior orientation is to assume that the perspective center is at a distance c from the
fiducial center.
2.2 Photographic Processes
The most widely used detector system for photogrammetric applications is based on photographic material. It is an analog system with some unique properties that make it superior to digital detectors such as CCD arrays. An aerial photograph contains on the order of one gigabyte of data (see Chapter 1); the most advanced semiconductor chips have a resolution of 2K × 2K, or 4 MB of data.
In this section we provide an overview of photographic processes and properties
of photographic material. The student should gain a basic understanding of exposure,
sensitivity, speed and resolution of photographic emulsions.
Fig. 2.9 provides an overview of photographic processes and introduces the terms
latent image, negative, (dia)positive and paper print.
Figure 2.9: Overview of photographic processes. Exposing the object forms a latent image; processing (developing, fixing, washing, drying) yields the negative, from which diapositives and paper prints are obtained by copying.
2.2.1 Photographic Material
2.2.2 Photographic Processes
Exposure

Exposure H is defined as the quantity of radiant energy collected by the emulsion:

    H = E t    (2.4)

where E is the irradiance as defined in Section 2.1.4, and t the exposure time. H is determined by the exposure time and the aperture stop of the lens system (compare the vignetting diagrams in Fig. 2.16). For fast moving platforms (or objects), the exposure time should be kept short to prevent blurring. In that case, a small f-number must be chosen so that enough energy interacts with the emulsion. The disadvantage of this setting is an increased influence of aberrations.
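Eq. 2.4 implies a trade-off between exposure time and f-number, since the irradiance E at the emulsion scales roughly with 1/N² for f-number N. A minimal sketch of this trade-off (the specific numbers are illustrative only, not from the text):

    def equivalent_time(t, n_old, n_new):
        """Exposure time that keeps H = E*t constant when the f-number changes."""
        return t * (n_new / n_old) ** 2

    t = 1 / 300                          # original exposure time [s]
    print(equivalent_time(t, 5.6, 4.0))  # opening up to f/4 allows ~1/588 s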
The sensitive elements of the photographic emulsion are microscopic crystals with diameters from 0.3 µm to 3.0 µm. One crystal is made up of about 10^10 silver halide ions. When radiant energy is incident upon the emulsion it is either reflected, refracted or absorbed. If the energy of a photon is sufficient to liberate an electron from a bound state to a mobile state, it is absorbed; the resulting free electron combines quickly with a silver halide ion to form a silver atom.
The active product of exposure is a small aggregate of silver atoms on the surface or in the interior of the crystal. This silver speck acts as a catalyst for the development reaction, in which the exposed crystals are completely reduced to silver whereas the unexposed crystals remain unchanged. The exposed but undeveloped film is called a latent image. In the most sensitive emulsions only a few photons are necessary for forming a developable image; the amplifying factor is therefore on the order of 10^9, one of the largest amplifications known.
Sensitivity
The sensitivity can be defined as the extent to which photographic material reacts to radiant energy. Since this is a function of wavelength, sensitivity is a spectral quantity. Fig. 2.11 provides an overview of emulsions with different sensitivities.
Figure 2.11: Spectral sensitivity of different emulsion types (color blind, orthochromatic, panchromatic, infrared) across the wavelength range 0.3-0.9 µm.
Silver halide emulsions are inherently sensitive only to ultraviolet and blue. In order for the silver halide to absorb energy at longer wavelengths, dyes are added; the three color-sensitive emulsion layers differ in the dyes that are added to the silver halide. If no dyes are added, the emulsion is said to be color blind. This may be desirable for paper prints because one can then work in the darkroom under red light without affecting the latent image. Of course, color blind emulsions are useless for aerial film because they would react only to blue light, which is scattered most, causing a diffuse image without contrast.
In orthochromatic emulsions the sensitivity is extended to include the green portion
of the visible spectrum. Panchromatic emulsions are sensitive to the entire visible
spectrum; infrared film includes the near infrared.
Colors and Filters
The visible spectrum is divided into three categories: 0.4 to 0.5 µm, 0.5 to 0.6 µm, and 0.6 to 0.7 µm. These three categories are associated with the primary colors blue, green and red. All other colors, approximately 10 million, can be obtained by additive mixtures of the primary colors; for example, white is a mixture of equal portions of the primary colors. If two primary colors are mixed, the complementary colors cyan, yellow and magenta are obtained. As indicated in Table 2.2, these colors also result from subtracting the primary colors from white light.
Table 2.2: Complementary colors obtained by additive mixture of two color primaries or by subtraction from white light.

color     additive mixture of 2 primaries   subtraction from white light
cyan      b + g                             w - r
yellow    g + r                             w - b
magenta   r + b                             w - g
Subtraction can be achieved by using filters. A filter with a subtractive color primary is transparent for two additive primary colors; for example, a yellow filter is transparent for green and red. Such a filter is also called a minus-blue filter. A combination of filters is transparent only for the color the filters have in common: cyan and magenta together are transparent for blue, since blue is their common primary color.
Filters play a very important role in obtaining aerial photography. A yellow filter, for example, prevents scattered (blue) light from interacting with the emulsion. Often, a combination of several filters is used to obtain photographs of high image quality. Since filters reduce the amount of radiant energy incident on the film, the exposure must be increased, either by decreasing the f-number or by increasing the exposure time.
Processing Color Film
Fig. 2.12 illustrates the concept of natural color and false color film material. A natural
color film is sensitive to radiation of the visible spectrum. The layer that is struck first
by radiation is sensitive to red, the middle layer is sensitive to green, and the third layer
is sensitive to blue. During the development process the situation becomes reversed;
that is, the red layer becomes transparent for red light. Wherever green was incident
the red layer becomes magenta (white minus green); likewise, blue changes to yellow.
If this developed film is viewed under white light, the original colors are perceived.
A closer examination of the right side of Fig. 2.12 reveals that the sensitivity of the film is shifted towards longer wavelengths: a yellow filter prevents blue light from interacting with the emulsion. The topmost layer is now sensitive to near infrared, the middle layer to red, and the third layer to green. After developing the film, red corresponds to infrared, green to red, and blue to green. This explains the name false color film. Vegetation reflects infrared most; hence forests, trees and meadows appear red.
Figure 2.12: Concept of processing natural color (left) and false color film (right).

2.2.3 Sensitometry
The transmittance T of a developed film is the ratio of transmitted to incident irradiance; its reciprocal is the opacity O, and the density D is the common logarithm of the opacity:

    T = Et / Ei                   (2.5)
    O = 1 / T = Ei / Et           (2.6)
    D = log(O) = log(Ei / Et)     (2.7)

where Ei is the irradiance incident on the film, Et the transmitted irradiance, T the transmittance, O the opacity and D the density. The characteristic curve of an emulsion relates the density D to the logarithm of the exposure H:

    D = D(log H)                  (2.8)
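A quick numerical illustration of Eqs. 2.5-2.7, with sample irradiance values chosen only for illustration:

    import math

    Ei, Et = 100.0, 1.0      # incident and transmitted irradiance (sample values)
    T = Et / Ei              # transmittance, Eq. 2.5
    O = 1.0 / T              # opacity, Eq. 2.6
    D = math.log10(O)        # density, Eq. 2.7
    print(T, O, D)           # 0.01 100.0 2.0 -- a density of 2 transmits 1% of the light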
Figure 2.13: Characteristic curve of a photographic emulsion: density D versus log exposure, showing the fog level, the toe region (points 1-2), the straight-line portion (points 2-3) and the shoulder with the solarization point (4).
During development, the exposed crystals, marked with silver specks, are reduced to black silver: a bright spot in the scene appears dark in the negative.
The characteristic curve begins at a threshold value, called fog. An unexposed film should be totally transparent, but this is not the case, because the base of the film has a transmittance smaller than unity; additionally, the transmittance of the unexposed emulsion is smaller than unity. Both factors contribute to fog. The lower part of the curve, between points 1 and 2, is called the toe region; here the exposure is not sufficient to produce a readable image. The next region, corresponding to correct exposure, is characterized by a straight line (between points 2 and 3): the density increases linearly with the logarithm of exposure. The slope of the straight line is called gamma, or contrast. A film with a slope of 45° is perceived as truly presenting the contrast of the scene; a film with a higher gamma exaggerates the scene contrast. The contrast depends not only on the emulsion but also on the development time: if the same latent image is developed longer, its characteristic curve becomes steeper.
The straight portion of the characteristic curve ends in the shoulder region, where the density no longer increases linearly. In fact, there is a turning point, called solarization, where D decreases with increasing exposure (point 4 in Fig. 2.13). Clearly, this region is associated with overexposure.
2.2.4 Speed
The size and the density of the silver halide crystals suspended in the gelatine of the emulsion vary. The larger the crystal size, the higher the probability that it is struck by photons during the exposure time, and the fewer photons are necessary to cause a latent image. Such a film is called faster because the latent image is obtained in a shorter time period compared to an emulsion with a smaller crystal size. In other words, a faster film requires less exposure to reach the same density.

Figure 2.14: Characteristic curves of two emulsions with different speeds; the exposures HA and HB needed to reach a specified density determine the speed.
2.2.5 Resolving Power
The image quality is directly related to the size and distribution of the silver halide crystals and the dyes suspended in the emulsion. The crystals are also called grains, and the grain size corresponds to the diameter of the crystal. Granularity refers to the size and distribution, concentration to the amount of light-sensitive material per unit volume. Emulsions are usually classified as fine-, medium-, or coarse-grained.
The resolving power of an emulsion refers to the number of alternating bars and
spaces of equal width which can be recorded as visually separate elements in the space
of one millimeter. A bar and a space is called a line or line pair. A resolving power of
50 l/mm means that 50 bars, separated by 50 spaces, can be discerned per millimeter.
Fig. 2.15 shows a typical test pattern used to determine the resolving power.
Figure 2.15: Typical test pattern (three-bar target) for determining resolving power.
The three-bar target shown in Fig. 2.15 is photographed under laboratory conditions using a diffraction-limited objective with a large aperture (to reduce the effect of the optical system on the resolution). The resolving power is highly dependent on the target contrast; therefore, targets with different contrast are used. High contrast targets have perfectly black bars separated by white spaces, whereas lower contrast targets have bars and spaces with varying grey shades. The table below lists some aerial films with their resolving powers.
designation                   speed (AFS)   resolution [l/mm]        gamma
                                            1000:1       1.6:1
Agfa Aviophot Pan             -             133          -           1.0-1.4
Kodak Plus-X Aerographic      160           100          50          1.3
Kodak High Definition         6.4           630          250         1.3
Kodak Infrared Aerographic    320           80           40          2.3
Kodak Aerial Color            6             200          100         -

Note that there is an inverse relationship between speed and resolving power: coarse-grained films are fast but have a lower resolution than fine-grained aerial films.
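To relate film resolving power to digital sensors, a common rule of thumb (an assumption here, not a statement from the text) samples each line pair with two pixels:

    def pixel_size_um(lp_per_mm):
        """Pixel size [um] that samples one line pair with two pixels."""
        return 1000.0 / (2.0 * lp_per_mm)

    for rp in (40, 100, 630):
        print(f"{rp:3d} l/mm -> {pixel_size_um(rp):5.1f} um pixels")

By this rule, a fine-grained film at 630 l/mm would correspond to sub-micron pixels, far beyond the CCD chips discussed in the next chapter.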
Chapter 3
Digital Cameras
3.1 Overview
The popular term "digital camera" is rather informal and may even be misleading, because the output is in many cases an analog signal. A more generic term is solid-state camera; another frequently used term is CCD camera. Though these terms obviously refer to the type of sensing elements, they are often used in a more generic sense.
The chief advantage of digital cameras over the classical film-based cameras is the
instant availability of images for further processing and analysis. This is essential in
real-time applications (e.g. robotics, certain industrial applications, bio-mechanics,
etc.).
Another advantage is the increased spectral flexibility of digital cameras. The major
drawback is the limited resolution or limited field of view.
Digital cameras have been used for special photogrammetric applications since the early seventies. However, the vidicon-tube cameras available at that time were not very accurate because the imaging tubes were not stable. This disadvantage was eliminated with the appearance of solid-state cameras in the early eighties. The charge-coupled device provides high stability and is therefore the preferred sensing device in today's digital cameras.
The most distinct characteristic of a digital camera is the image sensing device.
Because of its popularity we restrict the discussion to solid-state sensors, in particular
to charge coupled devices (CCD).
The sensor is glued to a ceramic substrate and covered by glass. Typical chip sizes are 1/2 in and 2/3 in, with as many as 2048 × 2048 sensing elements; however, sensors with fewer than 1K × 1K elements are more common. Fig. 3.1 depicts a line sensor (a) and a 2D sensor chip (b). The dimension of a sensing element is smaller than 10 µm, with an insulation space of a few microns between elements; this can easily be verified by considering the physical dimensions of the chip and the number of elements.
3.1.1 Camera Overview
Fig. 3.2 depicts a functional block diagram of the major components of a solid-state
camera.
Figure 3.2: Functional block diagram of a solid-state camera, from image capture (optics and sensor) through A/D conversion, signal processing and short-term storage to image transfer (frame grabber or imaging board) and the host computer for image processing, archiving and networking. A real camera may not have all components; the diagram is simplified, e.g. external signals received by the camera are not shown.
The optics component includes the lens assembly and filters, such as an infrared blocking filter to limit the spectral response to the visible spectrum. Many cameras use a C-mount for the lens; here, the distance between mount and image plane is 17.526 mm. As an option, the optics subsystem may comprise a shutter.
The most distinct characteristic of an electronic camera is the image sensing device.
Section 3.2 provides an overview of charge-coupled devices.
The solid-state sensor, positioned in the image plane, is glued on a ceramic substrate.
The sensing elements (pixels) are either arranged in a linear array or a frame array.
Linear arrays are used for aerial cameras while close range applications, including
mobile mapping systems, employ frame array cameras.
The accuracy of a solid-state camera depends a great deal on the accuracy and stability of the sensing elements, for example on the uniformity of the sensor element spacing and the flatness of the array. From the manufacturing process we can expect an accuracy of 1/10th of a micron; considering a sensor element size of 10 µm, the regularity amounts to 1/100. Camera calibration and measurements of the position and spacing of sensor elements confirm that the regularity is between 1/50th and 1/100th of the spacing.
The voltage generated by the sensor's readout mechanism must be amplified for further processing, which begins with converting the analog signal to a digital signal. This is necessary not only for producing digital output but also for signal and image processing. The functionality of these two components may range from rudimentary to very sophisticated in a real camera.
You may consider the first two components (optics and solid-state sensor) as image capture, the amplifiers and ADC as image digitization, and signal and image processing as image restoration. A few examples illustrate the importance of image restoration: the dark current can be measured and subtracted so that only its noise component remains; defective pixels can be detected and an interpolated signal output; the contrast can be changed (gamma correction); and image compression may be applied. The following example demonstrates the need for data compression.
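The example itself is not preserved in this copy of the text; the following stand-in computation, with assumed sensor parameters, makes the same point:

    rows, cols = 1024, 1024     # assumed sensor size [pixels]
    bytes_per_pixel = 1         # 8-bit grayscale
    fps = 30                    # assumed frame rate

    rate = rows * cols * bytes_per_pixel * fps
    print(f"{rate / 1e6:.1f} MB/s uncompressed")   # ~31.5 MB/s, awkward to transmit or store

Even this modest camera produces tens of megabytes per second; without compression, sustained recording quickly overwhelms transmission links and storage.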
3.1.2 Multiple frame cameras

The classical film-based cameras used in photogrammetry are often divided into aerial and terrestrial (close-range) cameras. The same principle can be applied to digital cameras. A digital aerial camera with a resolution comparable to a classical frame camera must have on the order of 15,000 × 15,000 sensing elements. Such image sensors do not (yet) exist. Two solutions exist to overcome this problem: line cameras, and multiple cameras housed in one camera body.
Fig. 3.3 shows an example of a multi-camera system (UltraCam from Vexcel). It
consists of 8 different cameras that are mounted in a common camera frame. The ground coverage of the individual frame cameras slightly overlaps, and the 8 images are merged into one uniform frame image by means of image processing.
3.1.3 Line cameras
An alternative solution to frame cameras are the so-called line cameras, of which the 3-line camera is the most popular. The 3-line camera employs three linear arrays, mounted in the image plane in fore, nadir and aft positions (see Fig. 3.4(a)). With this configuration, triple coverage of the surface is obtained. Examples of 3-line cameras include Leica's ADS40. It is also possible to implement the multiple line concept by having convergent lenses for every line, as depicted in Fig. 3.4(b).
A well-known example of a one line-camera is SPOT. The linear array consists of
7,000 sensing elements. Stereo is obtained by overlapping strips obtained from adjacent
orbits.
Fig. 3.5 shows the overlap configuration obtained with a 3-Line camera.
3.1.4 Camera Electronics
The camera electronics contains the power supply, a video timing and a sensor clock generator. Additional components are dedicated to special signal processing tasks, such as noise reduction, high-frequency cross-talk removal and black level stabilization. A "true" digital camera would have an analog-to-digital converter which samples the video signal with the frequency of the sensor element clock.
The camera electronics may have additional components which increase the camera's functionality. An example is the acceptance of an external sync signal, which makes it possible to synchronize the camera with other devices and thus allows multiple-camera setups with uniform sync.
Cameras with mechanical (or LCD) shutters need appropriate electronics to read out the sensor.

Figure 3.4: Schematic diagram of a 3-line camera. In (a), three sensor lines are mounted on the image plane in fore, nadir and aft locations. An alternative solution uses three convergent cameras, each with a single line mounted in the center (b).
3.1.5 Signal Transmission
The signal transmission follows video standards. Unfortunately, there is no uniform video standard used worldwide. The first standard dates back to 1941, when the National Television Systems Committee (NTSC) defined RS-170 for black-and-white television. This standard is used in North America, parts of South America, in Japan and the Philippines. European countries developed other standards, e.g. PAL (phase alternate line) and SECAM (sequential color and memory). Yet another standard for black-and-white television was defined by the CCIR (Comité Consultatif International des Radiocommunications); it differs only slightly from the NTSC standard, however.
Both the RS-170 and the CCIR standard use the principle of interlacing: the image, called a frame, consists of two fields. The odd field contains the odd line numbers, the even field the even line numbers. This technique is known from video monitors.
3.1.6 Frame Grabbers
Frame grabbers receive the video signal, convert it, buffer the data and output it to the storage device as a digital image. The analog front end of a frame grabber preprocesses the video signal and passes it to the A/D converter. The analog front end must cope with different signals (e.g. different voltage levels and impedances).
3.2 CCD Sensors: Working Principle and Properties

Figure 3.6: Development of CCD sensors between about 1975 and 2000: the sensor size grows from roughly 10K to 100M pixels while the pixel size shrinks from about 30 µm to 10 µm.
Fig. 3.6 illustrates the astounding development of CCD sensors over a period of 25 years. The sensor size in pixels is usually loosely termed resolution, giving rise to confusion since this term has a different meaning in photogrammetry.¹
3.2.1 Working Principle

Figure 3.7: Schematic diagram of a CCD detector. In (a) a photon with an energy greater than the bandgap of the semiconductor generates an electron-hole pair; the electron e is attracted by the positive voltage of the electrode while the mobile hole moves toward the ground. The collected electrons together with the electrode form a capacitor. In (b) this basic arrangement is repeated many times to form a linear array.
Suppose EMR is incident on the device. Photons with an energy greater than the band gap energy of the semiconductor may be absorbed in the depletion region, creating an electron-hole pair. The electron (referred to as a photon electron) is attracted by the positive charge of the metal electrode and remains in the depletion region, while the mobile hole moves toward the electrical ground. As a result, a charge accumulates at opposite sides of the insulator. The maximum charge depends on the voltage applied to the electrode. Note that the actual charge is proportional to the number of absorbed photons under the electrode.
The band gap energy of silicon corresponds to the energy of a photon with a wavelength of 1.1 µm. Lower energy photons (but still exceeding the band gap) may penetrate the depletion region and be absorbed outside it. In that case, the generated electron-hole pair may recombine before the electron reaches the depletion region. We realize that not every photon generates an electron that is accumulated at the capacitor site. Consequently, the quantum efficiency is less than unity.
¹Resolution refers to the minimum distance between two adjacent features, or the minimum size of a feature, which can be detected by photogrammetric data acquisition systems. For photography, this distance is usually expressed in line pairs per millimeter (lp/mm).
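The 1.1 µm figure follows from the photon energy E = hc/λ compared with silicon's band gap of about 1.12 eV; a short check:

    wavelength = 1.1e-6                        # [m]
    h, c, eV = 6.626e-34, 2.998e8, 1.602e-19   # Planck constant, speed of light, J per eV
    energy_eV = h * c / (wavelength * eV)
    print(f"{energy_eV:.2f} eV")               # ~1.13 eV, i.e. the band gap of silicon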
An ever increasing number of capacitors are arranged into what is called a CCD array. Fig. 3.7(b) illustrates the concept of a one-dimensional array (called a linear array)
that may consist of thousands of capacitors, each of which holds a charge proportional
to the irradiance at each site. It is customary to refer to these capacitor sites as detector
pixels, or pixels for short. Two-dimensional pixel arrangements in rows and columns
are called full-frame or staring arrays.
Figure 3.8: Principle of charge transfer. The top row shows a linear array of accumulated charge packets. Momentarily applying a voltage greater than V1 of electrode 1 to the neighboring electrode pulls the charge over to it (middle row). Repeating this operation in a sequential fashion eventually moves all packets to the final electrode (drain), where the charge is measured.
The next step is concerned with transferring and measuring the accumulated charge.
The principle is shown in Fig. 3.8. Suppose that the voltage of electrode i+1 is momentarily made larger than that of electrode i. In that case, the negative charge under
electrode i is pulled over to site i+1, below electrode i+1, provided that adjacent depletion
regions overlap. Now, a sequence of voltage pulses will cause a sequential movement
of the charges across all pixels to the drain (last electrode) where each packet of charge
can be measured. The original location of the pixel whose charge is being measured in
the drain is directly related to the time when a voltage pulse was applied.
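The bucket-brigade logic of Fig. 3.8 can be mimicked with a toy readout loop (a deliberately simplified model; real devices use overlapping multi-phase clocks):

    def read_out(packets):
        """Shift all charge packets toward the drain (end of list), measuring each as it arrives."""
        measured = []
        for _ in range(len(packets)):
            measured.append(packets[-1])    # packet at the drain is measured
            packets = [0] + packets[:-1]    # one clock pulse shifts every packet one site
        return measured

    print(read_out([5, 9, 2, 7]))  # [7, 2, 9, 5]: arrival time encodes the original pixel position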
Several ingenious solutions for transferring the charge accurately and quickly have
been developed. It is beyond the scope of this book to describe the transfer technology
in any detail. The following is a brief summary of some of the methods.
3.2.2 Charge Transfer

Linear Array With Bilinear Readout

Figure 3.9: Principle of linear array with bilinear readout. The accumulated charge is transferred during one pixel clock from the active detectors to the adjacent shift registers, from where it is read out sequentially.
Frame Transfer
You can visualize a frame transfer imager as consisting of two identical arrays. The
active array accumulates charges during integration time. This charge is then transferred
to the storage array, which must be shielded since it is also light sensitive. During the
transfer, charge is still accumulating in the active array, causing a slightly smeared
image.
The storage array is read out serially, line by line. The time necessary to read out the storage array far exceeds the integration time. Therefore, this architecture requires a mechanical shutter. The shutter offers the advantage that the smearing effect is suppressed.
Interline Transfer
Fig. 3.10 illustrates the concept of interline transfer arrays. Here, the columns of active detectors (pixels) are separated by vertical transfer registers. The accumulated charge in the pixels is transferred all at once and then read out serially. This again allows an open-shutter operation, assuming that the readout time does not exceed the integration time.
Since the CCD detectors of the transfer register are also sensitive to irradiance, they must be shielded. This, in turn, reduces the light-sensitive fraction of the chip area; this fraction is often called the fill factor. The interline transfer imager as described here has a fill factor of 50%. Consequently, longer integration times are required to capture an image. To increase the fill factor, microlenses may be used: in front of every pixel, a lens directs the light incident on the area defined by adjacent active pixels onto the (smaller) pixel.

Figure 3.10: Principle of interline transfer. The accumulated charge is transferred during one pixel clock from the active detectors to the adjacent vertical transfer registers, from where it is read out sequentially.
3.2.3 Spectral Response
Silicon is the most frequently used semiconductor material. In an ideal silicon detector, every photon exceeding the band gap (λ < 1.1 µm) causes a photon electron that is collected and eventually measured: the quantum efficiency is unity and the spectral response is a step function. As indicated in Fig. 3.11, the quantum efficiency of a real CCD sensor is less than unity for various reasons. For one, not all of the incident flux interacts with the detector (e.g., some is reflected by the electrodes in front illuminated sensors). Additionally, some electron-hole pairs recombine. Photons with longer wavelengths penetrate the depletion region and cause electron-hole pairs deep inside the silicon, where the probability of recombination is greater and many fewer electrons are attracted by the capacitor. The drop in spectral response toward blue and UV is also related to the electrode material, which may become opaque for λ < 0.4 µm.
Sensors illuminated from the back avoid the diffraction and reflection problems caused by the electrodes. Therefore, they have a higher quantum efficiency than front illuminated sensors. However, the detector must be thinner, because high energy photons are absorbed near the surface (opposite the depletion region) and the chances of electron-hole recombination are lower with a shorter diffusion length.
Figure 3.11: Spectral response of CCD sensors. In an ideal silicon detector all photons exceeding the band gap energy generate electrons. Front illuminated sensors have a lower quantum efficiency than back illuminated sensors because part of the incident flux may be absorbed or redirected by the electrodes (see text for details).

In order to make the detector sensitive to other spectral bands (mainly IR), detector
material with the corresponding bandgap energy must be selected. This leads to hybrid CCD arrays where the semiconductor and the CCD mechanism are two separate
components.
Chapter 4
Properties of Aerial Photography
4.1 Introduction
Aerial photography is the basic data source for making maps by photogrammetric means. The photograph is the end result of the data acquisition process discussed in the previous chapter. Strictly speaking, the immediate result of any photographic mission is the set of photographic negatives. Of prime importance for measuring and interpretation are the positive reproductions from the negatives, called diapositives.
Many factors determine the quality of aerial photography, such as
design and quality of lens system
manufacturing the camera
photographic material
development process
weather conditions and sun angle during photo flight
In this chapter we describe the types of aerial photographs, their geometrical properties and relationship to object space.
4.2 Classification of Aerial Photographs

Aerial photographs are usually classified according to the orientation of the camera axis, the focal length of the camera, and the type of emulsion.
4.2.1 Orientation of Camera Axis

Here, we introduce the terminology used for classifying aerial photographs according to the orientation of the camera axis. Fig. 4.1 illustrates the different cases.
true vertical photograph A photograph with the camera axis perfectly vertical (identical to the plumb line through the exposure center). Such photographs hardly exist in reality.

near vertical photograph A photograph with the camera axis nearly vertical. The deviation from the vertical is called tilt. It must not exceed the mechanical limits within which the stereoplotter can accommodate it. Gyroscopically controlled mounts provide stability of the camera so that the tilt is usually less than two to three degrees.
oblique photograph A photograph with the camera axis intentionally tilted between the vertical and horizontal. A high oblique photograph, depicted in Fig. 4.1(c), is tilted so much that the horizon is visible on the photograph. A low oblique does not show the horizon (Fig. 4.1(b)).
The total area photographed with obliques is much larger than that of vertical
photographs. The main application of oblique photographs is in reconnaissance.
4.2.2 Focal Length

Aerial photographs are further classified by the focal length of the camera and the associated angular coverage:

camera type     focal length (mm)   angular coverage (deg)
superwide       85                  119
wideangle       157                 82
intermediate    210                 64
normalangle     305                 46
narrowangle     610                 24

4.2.3 Emulsion Type
4.3 Geometric Properties of Aerial Photographs
We restrict the discussion about geometric properties to frame photography, that is,
photographs exposed in one instant. Furthermore, we assume central projection.
4.3.1 Definitions
Fig. 4.2 shows a diapositive in near vertical position. The following definitions apply:
perspective center C calibrated perspective center (see also camera calibration, interior orientation).
focal length c calibrated focal length (see also camera calibration, interior orientation).
principal point PP principal point of autocollimation (see also camera calibration, interior orientation).
camera axis C-PP axis defined by the projection center C and the principal point PP. The camera axis represents the optical axis. It is perpendicular to the image plane.
Figure 4.2: Tilted photograph in diapositive position and ground control coordinate system.
nadir point N also called photo nadir point, is the intersection of vertical (plumb line)
from perspective center with photograph.
ground nadir point N intersection of the vertical from the perspective center with the earth's surface.
tilt angle t angle between vertical and camera axis.
swing angle s is the angle at the principal point measured from the +y-axis counterclockwise to the nadir N.

azimuth is the angle at the ground nadir N measured from the +Y-axis in the ground system counterclockwise to the intersection O of the camera axis with the ground surface. It is the azimuth of the trace of the principal plane in the XY-plane of the ground system.
principal line pl intersection of the plane defined by the vertical through the perspective center and the camera axis with the photograph. Both the nadir N and the principal point PP are on the principal line. The principal line is oriented in the direction of steepest inclination of the tilted photograph.
isocenter I is the intersection of the bisector of angle t with the photograph. It is on
the principal line.
isometric parallel ip is in the plane of photograph and is perpendicular to the principal
line at the isocenter.
true horizon line intersection of a horizontal plane through the perspective center with the photograph or its extension. The horizon line falls within the extent of the photograph only for high oblique photographs.
horizon point intersection of principal line with true horizon line.
4.3.2 Image and Object Space
During the camera calibration process the projection center in image space is
changed to a new position, called the calibrated projection center. As discussed in
2.6, this is necessary to achieve close similarity between the image and object bundle.
4.3.3 Photo Scale
We use the representative fraction for scale expressions, in the form of a ratio, e.g. 1 : 5,000. As illustrated in Fig. 4.4, the scale of a near vertical photograph can be approximated by

mb = c / H    (4.1)

where mb is the photograph scale number, c the calibrated focal length, and H the flight height above mean ground elevation. Note that the flight height H refers to the average ground elevation. If it is with respect to the datum, then it is called flight altitude HA, with HA = H + h.
Figure 4.4: Flight height, flight altitude and scale of aerial photograph.
The photograph scale varies from point to point. For example, the scale for point P can easily be determined as the ratio of the image distance CP' to the object distance CP:

mP = CP' / CP    (4.2)

CP' = sqrt(xP^2 + yP^2 + c^2)    (4.3)

CP = sqrt((XP − XC)^2 + (YP − YC)^2 + (ZP − ZC)^2)    (4.4)
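Eqs. 4.1 through 4.4 are straightforward to evaluate in code. The following sketch (function and variable names are my own, chosen for illustration) computes the average photo scale and the scale at an individual point:

    import math

    def photo_scale(c, H):
        """Approximate scale of a near vertical photograph, Eq. 4.1 (c, H in meters)."""
        return c / H

    def point_scale(x, y, c, P, C):
        """Scale at image point (x, y) of object point P, Eqs. 4.2-4.4.
        P = (XP, YP, ZP) is the object point, C = (XC, YC, ZC) the perspective center."""
        cp_image = math.sqrt(x**2 + y**2 + c**2)                      # CP', Eq. 4.3
        cp_object = math.sqrt(sum((p - q)**2 for p, q in zip(P, C)))  # CP,  Eq. 4.4
        return cp_image / cp_object                                   # mP,  Eq. 4.2

    # wide angle camera (c = 0.15 m) flown 1,500 m above ground: scale 1 : 10,000
    print(1.0 / photo_scale(0.15, 1500.0))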
4.3.4 Relief Displacement
The effect of relief not only causes a change in scale but can also be considered as a component of image displacement. Fig. 4.5 illustrates this concept. Suppose point T is on top of a building and point B at the bottom. On a map, both points have identical X, Y coordinates; however, on the photograph they are imaged at different positions, namely T' and B'. The distance d between the two photo points is called relief displacement because it is caused by the elevation difference dh between T and B.
Figure 4.5: Relief displacement.
The magnitude of relief displacement for a true vertical photograph can be determined by the following equation:

d = (r dh) / H = (r' dh) / (H − dh)    (4.5)

where r = sqrt(xT^2 + yT^2), r' = sqrt(xB^2 + yB^2), and dh the elevation difference of the two points on a vertical. Eq. 4.5 can be used to determine the elevation dh of a vertical object:

dh = (d H) / r    (4.6)
The direction of relief displacement is radial with respect to the nadir point N, independent of camera tilt.
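A short numerical illustration of Eqs. 4.5 and 4.6, with made-up values (a 100 m high object photographed from H = 2,000 m):

    def relief_displacement(r, dh, H):
        """Relief displacement d, Eq. 4.5; r is the radial distance of the image
        of the top point, dh the elevation difference, H the flight height."""
        return r * dh / H

    def elevation_from_displacement(d, H, r):
        """Elevation difference of a vertical object, Eq. 4.6."""
        return d * H / r

    d = relief_displacement(r=0.100, dh=100.0, H=2000.0)      # 0.005 m = 5 mm
    print(d, elevation_from_displacement(d, 2000.0, 0.100))   # recovers dh = 100 m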
Chapter 5
Elements of Analytical Photogrammetry
5.1 Introduction
Figure 5.1: In (a) the data acquisition process is depicted. In (b) we illustrate the
reconstruction process.
In this chapter we describe these procedures and the mathematical models, except
aerotriangulation (block adjustment) which will be treated later. For one and the same
procedure, several mathematical models may exist. They differ mainly in the degree
of complexity, that is, how closely they describe physical processes. For example, a
similarity transformation is a good approximation to describe the process of converting
measured coordinates to photo-coordinates. This simple model can be extended to
describe more closely the underlying measuring process. With a few exceptions, we
will not address the refinement of the mathematical model.
5.2 Coordinate Systems

5.2.1 Photo-Coordinate System
The photo-coordinate system serves as the reference for expressing spatial positions
and relations of the image space. It is a 3-D cartesian system with the origin at the
perspective center. Fig. 5.2 depicts a diapositive with fiducial marks that define the
fiducial center FC. During the calibration procedure, the offset between fiducial center
and principal point of autocollimation, PP, is determined, as well as the origin of the
radial distortion, PS. The x, y coordinate plane is parallel to the photograph and the positive x-axis points in the flight direction.
Positions in the image space are expressed by point vectors. For example, point
vector p defines the position of point P on the diapositive (see Fig. 5.2). Point vectors
of positions on the diapositive (or negative) are also called image vectors. We have for
point P
Table 5.1: Summary of the most important relationships between image and object space.

relationship between                       procedure              mathematical model
measuring system and                       interior orientation   2-D transformation
  photo-coordinate system
photo-coordinate system and                exterior orientation   collinearity eq.
  object coordinate system
photo-coordinate systems                   relative orientation   collinearity eq.,
  of a stereopair                                                 coplanarity condition
model coordinate system and                absolute orientation   7-parameter
  object coordinate system                                        transformation
several photo-coordinate systems           bundle block           collinearity eq.
  and object coordinate system             adjustment
several model coordinate systems           independent model      7-parameter
  and object coordinate system             block adjustment       transformation
p = [xp, yp, −c]^T    (5.1)

Figure 5.2: Definition of the photo-coordinate system with fiducial marks (FC fiducial center, PP principal point, PS point of symmetry).
Note that for a diapositive the third component is negative. This changes to a positive value if the negative is used instead of the diapositive.
5.2.2 Object Space Coordinate Systems

In order to keep the mathematical development of relating image and object space simple, both spaces use 3-D cartesian coordinate systems. Positions of control points in object space are likely available in other coordinate systems, e.g. State Plane coordinates. It is important to convert any given coordinate system to a cartesian system before photogrammetric procedures, such as orientations or aerotriangulation, are performed.
5.3 Interior Orientation
We have already introduced the term interior orientation in the discussion about camera
calibration (see GS601, Chapter 2), to define the metric characteristics of aerial cameras. Here we use the same term for a slightly different purpose. From Table 5.1 we
conclude that the purpose of interior orientation is to establish the relationship between
a measuring system1 and the photo-coordinate system. This is necessary because it is
not possible to measure photo-coordinates directly. One reason is that the origin of the
photo-coordinate system is only mathematically defined; since it is not visible it cannot
coincide with the origin of the measuring system.
Fig. 5.3 illustrates the case where the diapositive to be measured is inserted in the measuring system whose coordinate axes are xm, ym. The task is to determine the transformation parameters so that measured points can be transformed into photo-coordinates.
5.3.1 Similarity Transformation
The most simple mathematical model for interior orientation is a similarity transformation with four parameters: translation vector t, scale factor s, and rotation angle α:

xf = s (xm cos α − ym sin α) − xt    (5.2)
yf = s (xm sin α + ym cos α) − yt    (5.3)

With the substitutions a11 = s cos α and a12 = s sin α these become

xf = a11 xm − a12 ym − xt    (5.4)
yf = a12 xm + a11 ym − yt    (5.5)
If we consider a11, a12, xt, yt as parameters, then the above equations are linear in the parameters. Consequently, they can be directly used as observation equations for a least-squares adjustment. Two observation equations are formed for every point known in both the measuring and the photo-coordinate system.
Figure 5.3: Interior orientation: (a) relationship between the measuring system xm, ym and the photo-coordinate system xf, yf; (b) non-perpendicularity of the measuring axes.
5.3.2 Affine Transformation

A more general model allows for a skew angle between the axes of the measuring system, as indicated in Fig. 5.3(b). The skew angle expresses the nonperpendicularity of the axes. Also, the scale may differ between the two axes. We have

xf = a11 xm + a12 ym − xt    (5.8)
yf = a21 xm + a22 ym − yt    (5.9)

where the coefficients a11, a12, a21, a22 are functions of the two scale factors sx, sy, the rotation angle α, and the skew angle.

Eqs. 5.8 and 5.9 are also linear in the parameters. Like in the case of a similarity transformation, these equations can be directly used as observation equations. With four fiducial marks we obtain eight equations, leaving a redundancy of two.
5.3.3 Correction for Radial Distortion

Radial distortion values are usually available from camera calibration for a set of radial distances. The distortion drp for a point P with radial distance rp between two calibrated distances ri and rj is obtained by linear interpolation:

drp = dri + (drj − dri)(rp − ri) / (rj − ri)    (5.10)

The distortion is decomposed into its components

drx = xp drp / rp    (5.11)
dry = yp drp / rp    (5.12)

and the photo-coordinates are corrected accordingly:

xp_corr = xp − drx = xp (1 − drp / rp)    (5.13)
yp_corr = yp − dry = yp (1 − drp / rp)    (5.14)

Alternatively, the distortion curve may be approximated by a polynomial

drp = p0 + p1 rp + p2 rp^2 + ...    (5.15)

The coefficients pi are found by fitting the polynomial curve to the distortion values. Eq. 5.15 is a linear observation equation. For every distortion value, an observation equation is obtained.
Figure 5.4: Radial distortion drp of point P at radial distance rp, decomposed into the components drx and dry.
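The correction procedure of Eqs. 5.10 through 5.14 amounts to interpolating the distortion value and scaling the coordinates. A minimal sketch (the calibration table is hypothetical):

    import numpy as np

    def correct_radial_distortion(xp, yp, r_cal, dr_cal):
        """Correct photo-coordinates for radial distortion, Eqs. 5.10-5.14.
        r_cal, dr_cal: radial distances and distortion values from calibration.
        Assumes the point is not exactly at the principal point (rp > 0)."""
        rp = np.hypot(xp, yp)
        drp = np.interp(rp, r_cal, dr_cal)    # linear interpolation, Eq. 5.10
        factor = 1.0 - drp / rp               # Eqs. 5.13/5.14
        return xp * factor, yp * factor

    # hypothetical calibration table (mm): distortion at selected radial distances
    r_cal = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 150.0])
    dr_cal = np.array([0.0, 0.002, 0.004, 0.001, -0.003, -0.008])
    print(correct_radial_distortion(80.0, 60.0, r_cal, dr_cal))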
5.3.4 Correction for Refraction
Fig. 5.5 shows how an oblique light ray is refracted by the atmosphere. According to Snell's law, a light ray is refracted at the interface of two different media. The density differences in the atmosphere act in fact as different media. The refraction causes the image to be displaced outwardly, quite similar to a positive radial distortion.
The radial displacement caused by refraction can be computed by

dref = K (r + r^3 / c^2)    (5.16)

K = ( 2410 H / (H^2 − 6H + 250) − 2410 h^2 / ((h^2 − 6h + 250) H) ) 10^−6    (5.17)

These equations are based on a model atmosphere defined by the US Air Force. The flying height H and the ground elevation h must be in units of kilometers.
Figure 5.5: Radial displacement dref caused by atmospheric refraction.
5.3.5 Correction for Earth Curvature

Figure 5.6: Radial displacement dearth caused by the earth's curvature.
5.3.6 Summary of Computing Photo-Coordinates

Figure 5.7: Corrections for radial distortion (dr), refraction (dref) and earth curvature (dearth) applied to coordinates measured in the xm, ym system to obtain photo-coordinates xf, yf.
4. Correct the photo-coordinates for refraction, according to Eqs. 5.16 and 5.17. This correction is negative. The displacement caused by refraction is a functional relationship dref = f(H, h, r, c). With a flying height H = 2,000 m and an elevation above ground h = 500 m, we obtain for a wide angle camera (c ≈ 0.15 m) a correction of 4 μm for r = 130 mm. An extreme example is a superwide angle camera, H = 9,000 m, h = 500 m, where dref = 34 μm for the same point.

5. Correct for earth curvature only if the control points (elevations) are not in a cartesian coordinate system or if a map is compiled. Using the extreme example as above, we obtain dearth = 65 μm. Since this correction has the opposite sign of the refraction, the combined correction for refraction and earth curvature would be dcomb = 31 μm. The correction due to earth curvature is larger than the correction for refraction.
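The numbers in items 4 and 5 can be reproduced from Eqs. 5.16 and 5.17. A minimal sketch (c = 0.085 m is my assumption for the superwide angle camera):

    def refraction_displacement(r, c, H_km, h_km):
        """Radial displacement dref due to refraction, Eqs. 5.16/5.17.
        r and c in meters; flying height H and ground elevation h in kilometers."""
        K = (2410.0 * H_km / (H_km**2 - 6.0 * H_km + 250.0)
             - 2410.0 * h_km**2 / ((h_km**2 - 6.0 * h_km + 250.0) * H_km)) * 1e-6
        return K * (r + r**3 / c**2)

    # wide angle, H = 2,000 m, h = 500 m, r = 130 mm: about 4 micrometers
    print(refraction_displacement(0.130, 0.15, 2.0, 0.5))
    # superwide angle, H = 9,000 m, h = 500 m: about 34 micrometers
    print(refraction_displacement(0.130, 0.085, 9.0, 0.5))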
5.4 Exterior Orientation

Exterior orientation is the relationship between image and object space. It is established by determining the camera position in the object coordinate system. The camera position is defined by the location of its perspective center and by its attitude, expressed by three independent angles.
As depicted in Fig. 5.8, vector q is the difference between the two point vectors c
and p. For satisfying the collinearity condition, we rotate and scale q from object to
image space. We have
pi = (1/λ) R q = (1/λ) R (p − c)    (5.20)

with the scale factor 1/λ and the orthogonal rotation matrix R.
      | cos φ cos κ                     −cos φ sin κ                     sin φ       |
R  =  | cos ω sin κ + sin ω sin φ cos κ   cos ω cos κ − sin ω sin φ sin κ  −sin ω cos φ |    (5.21)
      | sin ω sin κ − cos ω sin φ cos κ   sin ω cos κ + cos ω sin φ sin κ   cos ω cos φ |

Written out for the three components, Eq. 5.20 yields

 x = (1/λ) [(XP − XC) r11 + (YP − YC) r12 + (ZP − ZC) r13]    (5.22)
 y = (1/λ) [(XP − XC) r21 + (YP − YC) r22 + (ZP − ZC) r23]    (5.23)
−c = (1/λ) [(XP − XC) r31 + (YP − YC) r32 + (ZP − ZC) r33]    (5.24)
By dividing the first by the third and the second by the third equation, the scale factor 1/λ is eliminated, leading to the following two collinearity equations:

x = −c [(XP − XC) r11 + (YP − YC) r12 + (ZP − ZC) r13] / [(XP − XC) r31 + (YP − YC) r32 + (ZP − ZC) r33]    (5.25)

y = −c [(XP − XC) r21 + (YP − YC) r22 + (ZP − ZC) r23] / [(XP − XC) r31 + (YP − YC) r32 + (ZP − ZC) r33]    (5.26)
with

pi = [x, y, −c]^T,    p = [XP, YP, ZP]^T,    c = [XC, YC, ZC]^T
The six parameters XC, YC, ZC, ω, φ, κ are the unknown elements of exterior orientation. The image coordinates x, y are normally known (measured) and the calibrated
focal length c is a constant. Every measured point leads to two equations, but also adds
three other unknowns, namely the coordinates of the object point (XP , YP , ZP ). Unless
the object points are known (control points), the problem cannot be solved with only
one photograph.
The collinearity model as presented here can be expanded to include parameters of
the interior orientation. The number of unknowns will be increased by three2 . This
combined approach lets us determine simultaneously the parameters of interior and
exterior orientation of the cameras.
There are only limited applications for single photographs. We briefly discuss the
computation of the exterior orientation parameters, also known as single photograph resection, and the computation of photo-coordinates with known orientation parameters.
Single photographs cannot be used for the main task of photogrammetry, the reconstruction of object space. Suppose we know the exterior orientation of a photograph.
Points in object space are not defined, unless we also know the scale factor 1/λ for
every bundle ray.
2 Parameters of interior orientation: position of principal point and calibrated focal length. Additionally,
three parameters for radial distortion and three parameters for tangential distortion can be added.
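The rotation matrix of Eq. 5.21 and the collinearity equations 5.25/5.26 translate directly into code. A minimal sketch (angles in radians; function names are my own):

    import numpy as np

    def rotation_matrix(omega, phi, kappa):
        """Orthogonal rotation matrix R of Eq. 5.21."""
        so, co = np.sin(omega), np.cos(omega)
        sp, cp = np.sin(phi), np.cos(phi)
        sk, ck = np.sin(kappa), np.cos(kappa)
        return np.array([
            [cp * ck,                 -cp * sk,                 sp],
            [co * sk + so * sp * ck,   co * ck - so * sp * sk,  -so * cp],
            [so * sk - co * sp * ck,   so * ck + co * sp * sk,   co * cp]])

    def collinearity(P, C, omega, phi, kappa, c):
        """Photo-coordinates x, y of object point P, Eqs. 5.25/5.26."""
        q = rotation_matrix(omega, phi, kappa) @ (np.asarray(P) - np.asarray(C))
        return -c * q[0] / q[2], -c * q[1] / q[2]

    # a point vertically below the camera maps to the principal point (0, 0)
    print(collinearity((0.0, 0.0, 0.0), (0.0, 0.0, 1500.0), 0.0, 0.0, 0.0, 0.15))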
5.4.1 Single Photo Resection
The position and attitude of the camera with respect to the object coordinate system (exterior orientation of the camera) can be determined with the help of the collinearity equations. Eqs. 5.25 and 5.26 express measured quantities as a function of the exterior orientation parameters. Thus, the collinearity equations can be directly used as observation
equations, as the following functional representation illustrates.
x, y = f(XC, YC, ZC, ω, φ, κ, XP, YP, ZP)    (5.27)

where the first six arguments are the exterior orientation parameters and XP, YP, ZP the coordinates of the object point.
For every measured point two equations are obtained. If three control points are
measured, a total of 6 equations is formed to solve for the 6 parameters of exterior
orientation.
The collinearity equations are not linear in the parameters. Therefore, Eqs. 5.25 and 5.26 must be linearized with respect to the parameters. This also requires approximate
values with which the iterative process will start.
5.4.2 Computing Photo-Coordinates
5.5 Orientation of a Stereopair

5.5.1 Model Space, Model Coordinate System
Figure 5.9: The concept of model space (a) and model coordinate system (b).
The model coordinate system is defined by seven parameters, as in the transformation of 3-D cartesian systems. The decision on how to introduce the parameters depends on the application; one definition of the model coordinate system may be more suitable for a specific purpose than another. In the following subsections, different definitions will be discussed.
Now the orientation of a stereopair amounts to determining the exterior orientation
parameters of both photographs, with respect to the model coordinate system. From
single photo resection, we recall that the collinearity equations form a suitable mathematical model to express the exterior orientation. We have the following functional
relationship between observed photo-coordinates and orientation parameters:
x, y = f(X'C, Y'C, Z'C, ω', φ', κ', X"C, Y"C, Z"C, ω", φ", κ", X1, Y1, Z1, ..., Xn, Yn, Zn)    (5.28)

where the first six arguments are the exterior orientation of the primed, the next six that of the double primed photograph, and X1, Y1, Z1, ..., Xn, Yn, Zn are the model points.
where f refers to Eqs. 5.25 and 5.26. Every point measured in one photo-coordinate system renders two equations. The same point must also be measured in the second photo-coordinate system. Thus, for one model point we obtain 4 equations, or 4 n equations for n object points. On the other hand, n unknown model points lead to 3 n parameters, for a total of 12 + 3 n − 7. These are the exterior orientation elements of both photographs, minus the parameters we have eliminated by defining the model coordinate system. By equating the number of equations with the number of parameters we obtain the minimum number of points, nmin, which we need to measure for solving the orientation problem:

4 nmin = 12 − 7 + 3 nmin   =>   nmin = 5    (5.29)
The collinearity equations which are implicitly referred to in Eq. 5.28 are non-linear. By linearizing the functional form we obtain

x, y ≈ f0 + (∂f/∂X'C) ΔX'C + (∂f/∂Y'C) ΔY'C + ... + (∂f/∂Z"C) ΔZ"C    (5.30)

with f0 denoting the function evaluated with the initial estimates for the parameters.
For a point Pi, i = 1, ..., n, we obtain the following four generic observation equations

rx'i = (∂f/∂X'C) ΔX'C + (∂f/∂Y'C) ΔY'C + (∂f/∂Z'C) ΔZ'C + ... + f0 − x'i
ry'i = (∂f/∂X'C) ΔX'C + (∂f/∂Y'C) ΔY'C + (∂f/∂Z'C) ΔZ'C + ... + f0 − y'i
rx"i = (∂f/∂X"C) ΔX"C + (∂f/∂Y"C) ΔY"C + (∂f/∂Z"C) ΔZ"C + ... + f0 − x"i
ry"i = (∂f/∂X"C) ΔX"C + (∂f/∂Y"C) ΔY"C + (∂f/∂Z"C) ΔZ"C + ... + f0 − y"i    (5.31)
As mentioned earlier, the definition of the model coordinate system reduces the
number of parameters by seven. Several techniques exist to consider this in the least
squares approach.
1. The simplest approach is to eliminate the parameters from the parameter list.
We will use this approach for discussing the dependent and independent relative
orientation.
2. The knowledge about the 7 parameters can be introduced in the mathematical
model as seven independent pseudo observations (e.g. XC = 0), or as condition
equations which are added to the normal equations. This second technique is more
flexible and it is particularly suited for computer implementation.
5.5.2 Dependent Relative Orientation
The definition of the model coordinate system in the case of a dependent relative orientation is depicted in Fig. 5.10. The position and the orientation are identical to one of the two photo-coordinate systems, say the primed system. This amounts to introducing the exterior orientation of the primed photo-coordinate system as known. That is, we can eliminate it from the parameter list. Next, we define the scale of the model coordinate system. This is accomplished by defining the distance between the two perspective centers (base), or more precisely, by defining its x-component.
With this definition of the model coordinate system we are left with the following
functional model
x, y = f(ym"c, zm"c, ω", φ", κ", xm1, ym1, zm1, ..., xmn, ymn, zmn)    (5.32)

with the five parameters of exterior orientation of the double primed photograph and the model coordinates of points 1 ... n.
With 5 points we obtain 20 observation equations. On the other hand, there are 5 exterior orientation parameters and 5 × 3 = 15 model coordinates. Usually more than 5 points are measured. The redundancy is r = n − 5. The typical case of relative orientation
Figure 5.10: Definition of the model coordinate system and orientation parameters in the dependent relative orientation (base components bx, by, bz).
on a stereoplotter with the 6 von Gruber points leads only to a redundancy of one. It is
highly recommended to measure more, say 12 points, in which case we find r = 7.
With a non-linear mathematical model we must be concerned with suitable approximations to ensure that the iterative least-squares solution converges. In the case of the dependent relative orientation we have

ω"0 = φ"0 = κ"0 = ym"0c = zm"0c = 0    (5.33)

The initial estimates for the five exterior orientation parameters are set to zero for aerial applications, because the orientation angles are smaller than five degrees, and xm"c >> ym"c, xm"c >> zm"c. Initial positions for the model points can be estimated from the corresponding measured photo-coordinates. If the scale of the model coordinate system approximates the scale of the photo-coordinate system, we estimate initial model points by
xm0i ≈ x'i,    ym0i ≈ y'i,    zm0i ≈ z'i    (5.34)
The dependent relative orientation leaves one of the photographs unchanged; the
other one is oriented with respect to the unchanged system. This is of advantage for the
conjunction of successive photographs in a strip. In this fashion, all photographs of a
strip can be joined into the coordinate system of the first photograph.
5.5.3 Independent Relative Orientation
Fig. 5.11 illustrates the definition of the model coordinate system in the independent relative orientation.

Figure 5.11: Definition of the model coordinate system and orientation parameters in the independent relative orientation.
The origin is identical to one of the photo-coordinate systems, e.g. in Fig. 5.11 it
is the primed system. The orientation is chosen such that the positive xm-axis passes
through the perspective center of the other photo-coordinate system. This requires
determining two rotation angles in the primed photo-coordinate system. Moreover, it
eliminates the base components by, bz. The rotation about the x-axis (ω') is set to zero. This means that the ym-axis is in the x'-y' plane of the primed photo-coordinate system. The scale is chosen by defining xmc = bx.
With this definition of the model coordinate system we have eliminated the position of both perspective centers and one rotation angle. The following functional model applies:

x, y = f(φ', κ', ω", φ", κ", xm1, ym1, zm1, ..., xmn, ymn, zmn)    (5.35)

with the remaining exterior orientation angles of both photographs and the model coordinates of points 1 ... n.
The number of equations, number of parameters and the redundancy are the same
as in the dependent relative orientation. Also, the same considerations regarding initial
estimates of parameters apply.
Note that the exterior orientation parameters of both types of relative orientation are related. For example, the rotation angles φ', κ' can be computed from the spatial direction of the base in the dependent relative orientation:

φ' = arctan( zmc / bx )    (5.36)
κ' = arctan( ymc / (bx^2 + zmc^2)^(1/2) )    (5.37)
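In code, Eqs. 5.36 and 5.37 are a two-liner; the sketch below uses atan2 and hypot for numerical robustness (the sample base components are made up):

    import math

    def base_to_angles(bx, ymc, zmc):
        """Rotation angles of the independent relative orientation from the
        base direction of the dependent one, Eqs. 5.36/5.37 (radians)."""
        phi = math.atan2(zmc, bx)                        # Eq. 5.36
        kappa = math.atan2(ymc, math.hypot(bx, zmc))     # Eq. 5.37
        return phi, kappa

    print(base_to_angles(1.0, 0.02, -0.01))   # small angles for near-vertical photography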
5.5.4 Direct Orientation
In the direct orientation, the model coordinate system becomes identical with the ground
system, for example, a State Plane coordinate system (see Fig. 5.12). Since such systems
are already defined, we cannot introduce a priori information about exterior orientation
parameters as in both cases of relative orientation. Instead we use information about some of the object points. Points with known coordinates are called control points. A point with all three coordinates known is called a full control point. If only X and Y are known then we have a planimetric control point. Obviously, with an elevation control point we know only the Z coordinate.
z"
y"
z
y
C"
x"
x
P"
XC ,YC ,ZC
, ,
X"C ,Y"
C ,Z"
C position of perspective center right
, ,
Parameters
Figure 5.12: Direct orientation of a stereopair with respect to a ground control coordinate
system.
The required information about 7 independent coordinates may come from different arrangements of control points. For example, 2 full control points and an elevation, or two planimetric control points and three elevations, will render the necessary information. The functional model describing the latter case is given below:

x, y = f(X'C, Y'C, Z'C, ω', φ', κ', X"C, Y"C, Z"C, ω", φ", κ", Z1, Z2, X3, Y3, X4, Y4, X5, Y5)    (5.38)
5.6 Absolute Orientation

The absolute orientation involves the transformation of the model coordinate system into the ground control coordinate system by a 7-parameter similarity transformation:

p = s R pm + t    (5.39)
where pm = [xm, ym, zm]T is the point vector in the model coordinate system,
p = [X, Y, Z]T the vector in the ground control system pointing to the object point
P and t = [Xt , Yt , Zt ]T the translation vector between the origins of the 2 coordinate
systems. The rotation matrix R rotates vector pm into the ground control system and
s, the scale factor, scales it accordingly. The 7 parameters to be determined comprise
3 rotation angles of the orthogonal rotation matrix R, 3 translation parameters and one
scale factor.
model
Figure 5.13: Absolute orientation entails the computation of the transformation parameters between model and ground coordinate system.
The following functional model applies:
x, y, z = f(Xt, Yt, Zt, ω, φ, κ, s)    (5.40)

with three translation parameters, three orientation angles, and the scale factor s.
In order to solve for the 7 parameters at least seven equations must be available. For
example, 2 full control points and one elevation control point would render a solution.
If more equations (that is, more control points) are available then the problem of determining the parameters can be cast as a least-squares adjustment. Here, the idea is to
minimize the discrepancies between the transformed and the available control points.
An observation equation for control point Pi in vector form can be written as

ri = s R pmi + t − pi    (5.41)

with ri the residual vector [rx, ry, rz]^T. Obviously, the model is not linear in the parameters. As usual, linearized observation equations are obtained by taking the partial derivatives with respect to the parameters. Approximations for the parameters may be obtained by first performing a 2-D transformation with x, y-coordinates only.
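Eq. 5.41 in code: the sketch below evaluates the residuals of the 7-parameter transformation for given parameter values; the linearization and the least-squares iteration are omitted:

    import numpy as np

    def absolute_orientation_residuals(s, R, t, pm, p):
        """Residual vectors ri = s*R*pmi + t - pi of Eq. 5.41.
        pm: (n, 3) model coordinates, p: (n, 3) ground control coordinates."""
        return s * (pm @ R.T) + t - p

    # with the identity transformation the residuals are the coordinate differences
    pm = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.5]])
    p = np.array([[0.1, 0.0, 0.0], [1.0, 0.1, 0.0], [0.0, 1.0, 0.6]])
    print(absolute_orientation_residuals(1.0, np.eye(3), np.zeros(3), pm, p))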
Chapter 6
Measuring Systems
Most analytical photogrammetric procedures require photo coordinates as measured
quantities. This, in turn, requires accurate, reliable and efficient devices for measuring
points on stereo images. The accuracy depends on the application. Typical accuracies
range between three and ten micrometers. Consequently, the measuring devices must
meet an absolute, repeatable accuracy of a few micrometers over the entire range of the
photographs, that is, over an area of 230 mm × 230 mm.
In this chapter we discuss the basic functionality and working principles of analytical
plotters and digital photogrammetric workstations.
6.1 Analytical Plotters

6.1.1 Background
The analytical plotter was invented in 1957 by Helava. The innovative concept was met with reservation because computers at that time were not readily available, were expensive, and were not very reliable. It took nearly 20 years before the major manufacturers of photogrammetric equipment embraced the idea and began to develop analytical plotters. On the occasion of the ISPRS congress in 1976, analytical plotters were displayed for the first time to photogrammetrists from all over the world. Fig. 6.1 shows a typical analytical plotter.

Slowly, analytical plotters were bought to replace analog stereoplotters. By 1980, approximately 5,500 stereoplotters were in use worldwide, but only a few hundred analytical plotters. Today, the number of analytical plotters has increased to approximately 1,500. Leica and Zeiss are the main manufacturers, with a variety of systems. However, production of new instruments stopped in the early 1990s.
6.1.2 System Overview
Fig. 6.2 depicts the basic components of an analytical plotter. These components comprise the stereo viewer, the user interface, electronics and real-time processor, and host
computer.
Figure 6.2: Basic components of an analytical plotter: stereo viewer, user interface, real-time processor, and host computer.
Figure 6.3: Stereo viewer of the Planicomp P-3 analytical plotter from Zeiss.
reduce friction and wear and tear. An interesting solution is air bearings. The air is
pumped through small orifices located on the facing side of one of two flat surfaces.
This results in a thin uniform layer of air separating the two surfaces, providing smooth
motion.
The force to produce motion is most often produced by threaded spindles or precision
lead screws. Coarse positioning is most conveniently accomplished by a free moving
cursor. After clamping the stages, a pair of handwheels allows for precise positioning.
Measuring and Recording System
If the translation system uses precision lead screws then the measuring is readily accomplished by counting the number of rotations of the screw. For example, a single
rotation would produce a relative translation equal to the pitch of the screw. If the pitch
is uniform, a fractional part of the rotation can be related to a fractional part of the
74
6 Measuring Systems
pitch. Full revolutions are counted on a coarse scale while the fractional part is usually
interpreted on a separate, more accurate scale.
To record the measurements automatically, an analog to digital (A/D) conversion
is necessary because the x-y-readings are analog in nature. Today, A/D converters are
based on solid state electronics. They are very reliable, accurate and inexpensive.
Fig. 6.5 illustrates one of several concepts for the A/D conversion process, using
linear encoders. The grating of the glass scales is 40 μm. Light from the source L
transmits through the glass scale and is reflected at the lower surface of the plate carrier.
A photo diode senses the reflected light by converting it into a current that can be
measured. Depending on the relative position of plate carrier and scale, more or less
light is reflected. As can be seen from Fig. 6.5 there are two extreme positions where
either no light or all light is reflected. Between these two extreme positions the amount
of reflected light depends linearly on the movement of the plate carrier. Thus, the precise
position is found by linear interpolation.
User Interface
With user interface we refer to the communication devices an operator has available to work on an analytical plotter. These devices can be associated with the following functional groups:
viewer control buttons permit changing magnification, illumination and image rotation.
pointing devices are necessary to drive the measuring mark to specific locations, e.g. fiducial marks, control points or features to be digitized. Pointing devices include handwheels, footdisks, mice, trackballs and cursors. A typical configuration consists of a special cursor with an additional button to simulate z-movement (see Fig. 6.6).
Handwheels and footdisk are usually offered as an option to provide the familiar
environment of a stereoplotter.
digitizing devices are used to record the measuring mark together with additional information such as identifiers, graphical attributes and feature codes. For obvious
reasons, digitizing devices are usually in close proximity to pointing devices. For
example, the cursor is often equipped with additional recording buttons. Digitizing devices may also come in the form of foot pedals, a typical solution found with
stereoplotters. A popular digitizing device is the digitizing tablet that is mainly
used to enter graphical information. Another solution is the function keyboard.
It provides less flexibility, however.
host computer communication involves graphical user interface and keyboard.
Electronics and Real-Time Processor
The electronic cabinet and the real-time processor are the interface between the host
computer and the stereo viewer. The user does not directly communicate with this
sub-system.
The motors that drive the stages receive analog signals, for example a voltage. However, on the host computer only digital signals are available. Thus, the main function of
the electronics is to accomplish A/D and D/A conversion.
The real-time processor also computes stage coordinates from model coordinates in real-time. This involves executing the
collinearity equations and inverse interior orientation at a rate of 50 to 100 times per
second.
Host Computer
The separation of real-time computations from more general computational tasks makes
the analytical plotter a device independent peripheral with which the host communicates
via standard interface and communication. The task of the host computer is to assist
the operator in performing photogrammetric procedures such as the orientation of a
stereomodel and its digitization.
The rapid performance increase of personal computers (PCs) and their relatively low price make them the natural choice for the host computer. Other hosts typically used are UNIX workstations.
Auxiliary Devices
Depending on the type of instrument, auxiliary devices may be optionally available to increase the functionality. One such device is the superpositioning system. Here, the current digitizing status is displayed on a small, high resolution monitor. The display is introduced into the optical path so that the operator sees the digitized map superimposed on the stereomodel. This is very helpful for quickly checking the completeness and the correctness of graphical information.
6.1.3 Basic Functionality
Analytical plotters work in two modes: stereocomparator mode and model mode. We
first discuss the model mode because that is the standard operational mode.
Model Mode
Suppose we have set up a model. That is, the diapositives of a stereopair are placed on
the stages and are oriented. The task is now to move the measuring mark to locations
of interest, for example to features we need to digitize. How do the stages move to the
conjugate location?
The measuring mark, together with the binoculars, remain fixed. As a consequence,
the stages must move to go from one point to another. New positions are indicated by
the pointing devices, for example by moving the cursor in the direction of the new point.
The cursor position is constantly read by the real-time processor. The analog signal is
converted to a 3-D location. One can think of moving the cursor in the 3-D model
space. The 3-D model position is immediately converted to stage coordinates. This
is accomplished by first computing photo-coordinates with the collinearity equations,
followed by computing stage coordinates with the inverse interior orientation. We have
symbolically
X, Y, Z
x , y
=
=
x , y
xm , ym
xm , ym
=
=
=
77
f (ext.or , X, Y, Z, c )
f (int.or , x , y )
f (int.or , x , y )
These equations symbolize the classical real-time loop of analytical plotters. The
real-time processor is constantly reading the user interface. Changes in the pointing
devices are converted to model coordinates X, Y, Z which, in turn, are transformed
to stage coordinates xm, ym that are then submitted to the stage motors. This loop
is repeated at least 50 times per second to provide smooth motion. It is important to
realize that the pointing devices do not directly move the stages. Alternatively, model
coordinates can also be provided by the host computer.
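The loop can be sketched as follows; the three callables stand in for vendor-specific routines and are assumptions of this sketch, not part of any real plotter API:

    import time

    def real_time_loop(read_model_position, model_to_stage, drive_stages, rate_hz=50):
        """Classical real-time loop of an analytical plotter (sketch).
        read_model_position: model coordinates X, Y, Z from the pointing devices;
        model_to_stage: collinearity equations plus inverse interior orientation;
        drive_stages: submits stage coordinates to the stage motors."""
        period = 1.0 / rate_hz
        while True:
            X, Y, Z = read_model_position()          # pointing devices are read...
            for photo in ("left", "right"):
                xm, ym = model_to_stage(photo, X, Y, Z)
                drive_stages(photo, xm, ym)          # ...and the stages are driven
            time.sleep(period)                       # at least 50 cycles per second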
Comparator Mode
Clearly, the model mode requires the parameters of both exterior and interior orientation. These parameters are only known after successful interior and relative orientation.
Prior to this situation, the analytical plotter operates in the comparator mode. The same
principle as explained above applies. The real-time processor still reads the position
of the pointing devices. Instead of using the orientation parameters, approximations
are used. For example, the 5 parameters of relative orientation are set to zero, and the
same assumptions are made as discussed in Chapter 2, relative orientation. Since only
rough estimates for the orientation parameters are used, conjugate locations are only
approximate. The precise determination of conjugate points is obtained by clearing
the parallaxes, exactly in the same way as with stereocomparators. Again, the pointing
devices do not drive the stages directly.
Interior Orientation
Upon acceptance, the interior orientation parameters are downloaded to the real-time
processor.
Relative Orientation
The relative orientation requires first a successful interior orientation. Prior to the measuring phase, certain parameters must be defined, for example the number of parallax points and the type of orientation (e.g. independent or dependent relative orientation). The analytical plotter is still in comparator mode. The stages are now directed to approximate locations of conjugate points, which are regularly distributed across the model. The approximate positions are computed according to the considerations discussed in the previous section. Now, the operator selects a suitable point for clearing the parallaxes. This is accomplished by locking one stage and moving the other one until the point is parallax-free.
After six points are measured, the parameters of relative orientation are computed
and results are displayed. If the computation is successful, the parameters are downloaded to the RT processor and a model is established. At that time, the analytical
plotter switches to the model mode. Now, the operator moves in an oriented model.
To measure additional points, the system changes automatically to comparator mode to
force the operator to clear the parallaxes.
It is good practice to include the control points in the measurements and computations of the relative orientation. Also, it is advisable to measure twelve or more points.
Absolute Orientation
The absolute orientation requires a successful interior and relative orientation. In case
the control points are measured during the relative orientation, the system immediately
computes the absolute orientation. As soon as the minimum control information is
measured, the system computes approximate locations for additional control points and
positions the stages accordingly.
6.2 Digital Photogrammetric Workstations
Probably the single most significant product of digital photogrammetry is the digital
photogrammetric workstation (DPW), also called a softcopy workstation. The role of
DPWs in digital photogrammetry is equivalent to that of analytical plotters in analytical
photogrammetry.
Table 6.1: Comparison of analytical plotters, computer-assisted stereoplotters and conventional stereoplotters.

Feature                       Analytical     Computer-assisted   Conventional
                              Plotter        Stereoplotter       Stereoplotter
accuracy                      2 μm           10 μm               10 μm
image refinement              yes            no                  no
drive to FM, control points   yes            no                  no
profiles                      yes            yes                 yes
DEM grid                      yes            no                  no
photography
  projection system           any            only central        only central
  size                        18 × 9 in.     9 × 9 in.           9 × 9 in.
orientations
  computer assistance         high           medium              none
  time                        10 minutes     30 minutes          1 hour
  storing parameters          yes            yes                 no
  range of or. parameters     unlimited      ±5°                 ±5°
map compilation
  CAD systems                 many           few                 none
  time                        20 %           30 %                100 %
The development of DPWs is greatly influenced by computer technology. Considering the dynamic nature of this field, it is not surprising that digital photogrammetric workstations undergo constant changes, particularly in terms of performance, comfort level, components, costs, and vendors. It would be nearly impossible to provide a comprehensive list of the commercially available products, much less describe them in detail. Rather, the common aspects, such as architecture and functionality, are emphasized.
The next section provides some background information, including a few historical
remarks and an attempt to classify the systems. This is followed by a description of the
basic system architecture and functionality. Finally, the most important applications
are briefly discussed.
To build on common ground, I frequently compare the performance and functionality
of DPWs with that of analytical plotters. Sec. 6.3 summarizes the advantages and the
shortfalls of DPWs relative to analytical plotters.
6.2.1 Background
Great strides have been made in digital photogrammetry during the past few years due
to the availability of new hardware and software, such as powerful image processing
workstations and vastly increased storage capacity. Research and development efforts
resulted in operational products that are increasingly being used by government organizations and private companies to solve practical photogrammetric problems. We are
witnessing the transition from conventional to digital photogrammetry. DPWs play a
key role in this transition.
Digital Photogrammetric Workstation and Digital Photogrammetry Environment
Fig. 6.7 depicts a schematic diagram of a digital photogrammetry environment. On the
input side we have a digital camera or a scanner with which existing aerial photographs
are digitized. At the heart of the processing side is the DPW. The output side may
comprise a filmrecorder to produce hardcopies in raster format and a plotter for providing
hardcopies in vector format. Some authors include the scanner and filmrecorder as
components of the softcopy workstation. The view presented here is that a DPW is a
separate, unique part of a digital photogrammetric system.
As discussed in the previous chapters, digital images are obtained directly by using
electronic cameras, or indirectly by scanning existing photographs. The accuracy of
digital photogrammetry products depends largely on the accuracy of electronic cameras
or on scanners, and on the algorithms used. In contrast to analytical plotters (and even
more so to analog stereoplotters), the hardware of DPWs has no noticeable effect on
the accuracy.
Figs. 6.9 and 6.8 show typical digital photogrammetric workstations. At first sight
they look much like ordinary graphics workstations. The major differences are the
stereo display, 3-D measuring system, and increased storage capacity to hold all digital
images of an entire project. Sec. 6.2.2 elaborates further on these aspects.
The station shown in Fig. 6.8 features two separate monitors. In this fashion, the
stereo monitor is entirely dedicated to display imagery only. Additional information,
such as the graphical user interface, is displayed on the second monitor. As an option to the 3-D pointing device (trackball), the system can be equipped with handwheels to more closely simulate the operation on a classical instrument.

Figure 6.7: Schematic diagram of a digital photogrammetry environment. Existing photographs are digitized with a scanner, or digital images are acquired directly with a digital camera; the DPW (computer, storage, display, user interface) processes the images; a film recorder and a plotter produce output in raster (orthophoto) and vector (map) format.
The main characteristic of Intergraph's ImageStation Z is the 28-inch panoramic
monitor that provides a large field of view for stereo display (see Fig. 6.9, label 1).
Liquid crystal glasses (label 3) ensure high-quality stereo viewing. The infrared emitter
on top of the monitor (label 4) provides synchronization of the glasses and allows group
viewing. The 3-D pointing device (label 6) allows freehand digitizing and the 10 buttons
facilitate easy menu selection.
6.2.2 Basic System Components

Fig. 6.10 depicts the basic system components of a digital photogrammetric workstation.
CPU the central processing unit should be reasonably fast considering the amount of computations to be performed. Many processes lend themselves to parallel processing, and parallel processing machines are available at reasonable prices. However, programming that takes advantage of them is still a rare commodity and prevents a more widespread use of such workstations.
OS the operating system should be 32 bit based and suitable for real-time processing.
UNIX satisfies these needs; in fact, UNIX based workstations were the systems of choice for DPWs until the emergence of Windows 95 and NT made PCs a serious competitor of UNIX based workstations.
main memory due to the large amount of data to be processed, sufficient memory
should be available. Typical DPW configurations have 64 MB, or more, of RAM.
storage system must accommodate the efficient storage of several images. It usually
consists of a fast access storage device, e.g. hard disks, and mass storage media
with slower access times. Sec. 6.2.3 discusses the storage system in more detail.
graphic system the graphics display system is another crucial component of the DPW.
The purpose of the display processor is to fetch data, such as raster (images)
or vector data (GIS), process and store it in the display memory and update the
monitor. The display system also handles the mouse input and the cursor.
3-D viewing system is a distinct component of a DPW, usually not found in other
workstations. It should allow viewing a photogrammetric model comfortably
and possibly in color. For a human operator to see stereoscopically, the left
and right image must be separated. Sec. 6.2.3 discusses the principles of stereo
viewing.
3-D measuring device is used for stereo measurements by the operator. The solution
may range from a combination of a 2-D mouse and trackball to an elaborate device
with several programmable function buttons.
Figure 6.9: Digital photogrammetric workstation. Shown is Intergraph's ImageStation Z. Its main characteristic is the large stereo display of the 28-inch panoramic monitor. Courtesy Intergraph Corporation, Huntsville, AL.
network a modern DPW hardly works in isolation. It is often connected to the scanning
system and to other workstations, such as a geographic information system. The
client/server concept provides an adequate solution in this scenario of multiple
workstations and shared resources (e.g. printers, plotters).
user interface may consist of hardware components such as keyboard, mouse, and
auxiliary devices like handwheels and footwheels (to emulate an analytical plotter
environment). A crucial component is the graphical user interface (GUI).
6.2.3 Basic System Functionality

The basic system functionality can be divided into the following categories:
1. Archiving: store and access images, including image compression and decompression.
Figure 6.10: Basic system components of a digital photogrammetric workstation: CPU and operating system, memory, storage system, graphic system, 3-D viewing and 3-D measuring devices, network, and periphery (printer, plotter).
Table 6.2: Magnification and size of field of view of analytical plotters.
magnification    field of view (mm)
5                42
6                32
10               21
15               14
20               10
40               5
The larger the magnification, the smaller the field of view. Table 6.2 lists zoom values and the size of the corresponding film area that appears in the oculars. Feature extraction (compilation)
the corresponding film area that appears in the oculars. Feature extraction (compilation)
is usually performed with a magnification of 8 to 10 times. With higher magnification,
the graininess of the film reduces the quality of stereoviewing. It is also worth pointing
out that stereoscopic viewing requires a minimum field of view.
Let us now compare the viewing capabilities of analytical plotters with that of DPWs.
First, we realize that this function is performed by the graphics subsystem, that is, by the
monitor(s). To continue with the previous example of a film with 70 lp/mm resolution,
viewed 10 magnified, we read from Table 6.2 that the corresponding area on the film
has a diameter of 20 mm. To preserve the high film resolution it ought to be digitized
with a pixelsize of approximately 6 m (1000/(2 70)). It follows that the monitor
should display more than 3K 3K pixels. Monitors with this sort of resolution do
not exist or are prohibitively expensive, particularly when considering color imagery
and true color rendition (24+ bit planes).
If we relax the high resolution requirements and assume that images are digitized with a pixelsize of 15 μm, then a monitor with the popular resolution of 1280 × 1024 would display an area that is quite comparable to that of analytical plotters.
Magnification, known under the more popular terms zooming in/out, is achieved by
changing the ratio of number of image pixels displayed to the number of monitor pixels.
To zoom in, more monitor pixels are used than image pixels. As a consequence, the
size of the image viewed decreases and stereoscopic viewing may be affected.
The analogy to the floating point mark of analytical plotters is the three dimensional
cursor that is created by using a pattern of pixels, such as a cross or a circle. The cursor
must be generated by bitplane(s) that are not used for displaying the image. The cursor
moves in increments of pixels, which may appear jerky compared to the smooth motion
of analytical plotters. One advantage of cursors, however, is that they can be represented
in any desirable shape and color.
The accuracy of interactive measurements depends on how well you can identify
a feature, on the resolution, and on the cursor size. Ultimately, the pixelsize sets the
lower limit. Assuming that the maximum error is 2 pixels, the standard deviation is
approximately 0.5 pixel. A better sub-pixel accuracy can be obtained in two ways. A
straight-forward solution is to use more monitor pixels than image pixels. Fig. 6.11(a) exemplifies the situation. Suppose we use 3 × 3 monitor pixels to display one image pixel. The standard deviation of a measurement is now 0.15 image pixels. As pointed out earlier, using more monitor pixels for displaying an image pixel reduces the size of the field of view. In the example above, only an area of 6 mm would be seen, hardly enough to support stereopsis.
Figure 6.11: Two solutions to sub-pixel accuracy measurements. In (a), an image pixel is displayed by m monitor pixels, m > 1. The cursor moves in increments of monitor pixels, corresponding to 1/m image pixels. In (b) the image is moved under the fixed cursor position in increments smaller than an image pixel. This requires resampling the image at sub-pixel locations.
Table 6.3: Methods for separating the left and right image of a stereopair.

separation    implementation
spatial       2 monitors + stereoscope
              1 monitor + stereoscope (split screen)
              2 monitors + polarization
spectral      anaglyphic
              polarization
temporal      alternating display of left and right image (Fig. 6.13)
The shutters of such eyewear are synchronized by an infrared emitter, usually mounted on top of the monitor (Fig. 6.9 shows an example). Understandably, the goggles are heavier and more expensive compared
to the simple polarizing glasses of the first solution. On the other hand, the polarizing
screen and the monitor are a tightly coupled unit, offering less flexibility in the selection
of monitors.
Roaming
Roaming refers to moving the 3-D pointing device. This can be accomplished in two
ways. In the simpler solution, the cursor moves on the screen according to the movements of the pointing device (e.g. mouse) by the operator. The preferred solution,
however, is to keep the cursor locked in the screen center, which requires redisplaying
the images. This is similar to the operation of analytical plotters where the floating
point mark is always in the center of the field of view.
The following discussion refers to the second solution. Suppose we have a stereo
DPW with a 1280 × 1024 resolution, true color monitor, and imagery digitized to 15 μm pixelsize (or approximately 16K × 16K pixels). Let us now freely roam within a stereomodel, much as we would do on an analytical plotter, and analyze the consequences in terms of transfer rates and memory size.
Fig. 6.14 schematically depicts the storage and graphic systems. The essential components of the graphic system include the graphics processor, the display memory, the digital-to-analog converter (DAC), and the display device (a CRT monitor in our case).

Figure 6.13: Schematic diagram of the temporal separation of the left and right image of a stereopair for stereoscopic viewing. In (a), a polarizing screen is mounted in front of the display and viewed through polarizing glasses. Another solution is sketched in (b): the screen is viewed through synchronized eyewear with alternating shutters. See text for detailed explanations.
The display memory contains the portion of the image that is displayed on the monitor.
Usually, the display memory is larger than the screen resolution to allow roaming in
real-time. As soon as we roam out of the display memory, new image data must be
fetched from disk and transmitted to the graphics system.
Graphic systems come in the form of high-performance graphics boards, such as
RealiZm or Vitec boards. These state-of-the-art graphics systems are as complex as the
system CPU. The interaction of the graphics system with the entire DPW, e.g. requesting
new image data, is a critical measure of system performance.
Factors such as storage organization, bandwidths, and additional processing cause
delays in the stereo display. Let us further reflect on these issues.
With an image compression rate of three, approximately 240 MB are required to store
one color image. Consequently, a 24 GB mass storage system could store 100 images
on-line. By the same token, a hard disk with 2.4 GB capacity could hold 10 compressed
color images.
Figure 6.14: Schematic diagram of storage system, graphic system and display.

Since we request a true color display, approximately 24 MB are required to hold the two images of the stereomodel. As discussed in the previous section, the left and right image must be displayed alternately at a frequency of 120 Hz to obtain an acceptable stereo model. The bandwidth of the display memory amounts to 1280 × 1024 × 3 × 120 ≈ 472 MB/sec. Only high speed, dual port memory, such as VRAM (video RAM), satisfies
such high transfer rates. For less demanding operations, such as storing programs or
fonts, less expensive memory is used in high performance graphic workstations.
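The storage and bandwidth figures above follow from simple arithmetic, as the sketch below shows (the parameter defaults restate the example's assumptions):

    def display_bandwidth_mb(width=1280, height=1024, bytes_per_pixel=3, refresh_hz=120):
        """Display memory bandwidth for alternating stereo display, in MB/sec."""
        return width * height * bytes_per_pixel * refresh_hz / 1e6

    def compressed_image_mb(pixels=16000, bytes_per_pixel=3, compression=3.0):
        """Approximate storage for one compressed color aerial image, in MB."""
        return pixels * pixels * bytes_per_pixel / compression / 1e6

    print(display_bandwidth_mb())    # about 472 MB/sec, as computed above
    print(compressed_image_mb())     # about 256 MB, i.e. roughly the 240 MB quoted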
At what rate should one be able to roam? Skilled operators can trace contour lines
at a speed of 20 mm/sec. A reasonable request is that the display on the monitor should
be crossed within 2 seconds, in any direction. This translates to 1280 × 0.015 / 2 ≈ 10 mm/sec in our example. Some state a maximum roam rate of 200 pixels/sec on Intergraph's ImageStation Z softcopy workstation. As soon as we begin to move the
pointing device, new portions of the model must be displayed. To avoid immediate disk
transfer, the display memory is larger than the monitor, usually four times. Thus, we
can roam without problems within a distance twice as long as the screen window at the
cost of increased display memory size (32 MB of VRAM in our example).
Suppose we move the cursor with a speed of 10 mm/sec toward one edge. When
will we hit the edge of the display memory? Assuming we begin at the center, after one
second the edge is reached and the display memory must be updated with new data. To
assure continuous roaming, at least within one stereomodel, the display memory must
be updated before the screen window reaches the limit. The new position of the window
is predicted by analyzing the roaming trajectory. A look-ahead algorithm determines
the most likely positions and triggers the loading of image data through the storage hierarchy.
Figure 6.15: Schematic diagram of the different windows related to the size of an
image. Real-time roaming is possible within the display memory.
System memory holds a larger portion of the image. The location is
predicted by analyzing the trajectory of recent cursor movements.
8 Fast wide SCSI-2 devices, available as options, sustain transfer rates of 20 MB/sec. This would be
sufficient for roaming within a b/w stereo model.