Function Approximation for Free-Form Surfaces
Mohamed N. Ahmed, Sameh M. Yamany, and Aly A. Farag
Computer Vision and Image Processing Laboratory
University of Louisville, Department of Electrical Engineering
Louisville, KY 40292
E-mail: {farag,mohamed,yamany}@cairo.spd.louisville.edu,
Phone: (502) 852-7510, Fax: (502) 852-1580.
Abstract
This paper presents a survey of techniques used for function approximation and free-form surface
reconstruction. A comparative study is performed between classical interpolation methods and two
methods based on neural networks. We show that the neural network approach provides good approximation and better results than classical techniques when used for reconstructing smoothly
varying surfaces.
1 Introduction
The interpolation problem, in its strict sense, may be stated as follows [1]: Given a set of N different points
\{x_i \in \mathbb{R}^m \mid i = 1, 2, \ldots, N\} and a corresponding set of N real numbers \{d_i \in \mathbb{R}^n \mid i = 1, 2, \ldots, N\},
find a function F:

F : \mathbb{R}^m \to \mathbb{R}^n \quad \text{such that} \quad F(x_i) = d_i, \quad i = 1, 2, \ldots, N,    (1)
where m and n are integers. The interpolation surface is constrained to pass through all the data points.
The interpolation function can take the form

F(x) = \sum_{i=1}^{N} w_i \, \varphi(x, x_i),    (2)

where \{\varphi(x, x_i) \mid i = 1, 2, \ldots, N\} is a set of N arbitrary functions known as the radial basis functions.
Inserting the interpolation conditions, we obtain the following set of simultaneous linear equations for the
unknown coefficients (weights) of the expansion \{w_i\}:
\begin{pmatrix} \varphi_{11} & \varphi_{12} & \cdots & \varphi_{1N} \\ \varphi_{21} & \varphi_{22} & \cdots & \varphi_{2N} \\ \vdots & \vdots & & \vdots \\ \varphi_{N1} & \varphi_{N2} & \cdots & \varphi_{NN} \end{pmatrix} \begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_N \end{pmatrix} = \begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_N \end{pmatrix},    (3)
where

\varphi_{ji} = \varphi(x_j, x_i), \quad j, i = 1, 2, \ldots, N.    (4)

Let the N \times 1 vectors d and W, defined as

d = [d_1, d_2, \ldots, d_N]^t,    (5)

W = [w_1, w_2, \ldots, w_N]^t,    (6)

represent the desired response vector and the linear weight vector, respectively. Let

\Phi = \{\varphi_{i,j} \mid i, j \in [1, N]\}    (7)

denote the N \times N interpolation matrix. Hence, equation (3) can be rewritten as

\Phi W = d.    (8)
Provided that the data points are all distinct, the interpolation matrix \Phi is positive definite [1]. Therefore,
the weight vector W can be obtained by

W = \Phi^{-1} d,    (9)

where \Phi^{-1} is the inverse of the interpolation matrix \Phi. Theoretically speaking, a solution to the system in
(9) always exists. Practically, however, the matrix \Phi can be singular. In such cases, regularization theory
can be used, where the matrix \Phi is perturbed to \Phi + \lambda I to assure positive definiteness [1].
Based on the interpolation matrix, different interpolation techniques are available [2]-[5]. Some of these
techniques are reviewed in the next section.
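To make the weight computation of Eqs. (2)-(9) concrete, the following minimal sketch (ours, not part of the original study) builds the interpolation matrix for a user-supplied radial basis function and solves for the weight vector, optionally adding the perturbation \Phi + \lambda I discussed above; the Gaussian basis and all numeric values in the example are assumed choices.

```python
import numpy as np

def rbf_interpolate(X, d, phi, lam=0.0):
    """Solve (Phi + lam*I) w = d for the expansion weights of Eq. (2)."""
    # Pairwise distances r_ji = ||x_j - x_i|| between the data sites.
    diff = X[:, None, :] - X[None, :, :]
    R = np.sqrt((diff ** 2).sum(axis=-1))
    Phi = phi(R)                                             # interpolation matrix, Eq. (3)
    return np.linalg.solve(Phi + lam * np.eye(len(X)), d)    # Eq. (9)

def rbf_evaluate(x, X, w, phi):
    """Evaluate F(x) = sum_i w_i * phi(||x - x_i||), Eq. (2), at one query point."""
    return w @ phi(np.linalg.norm(X - x, axis=1))

# Example with a Gaussian basis (an assumed choice, not prescribed above).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(50, 2))
d = np.sin(0.5 * X[:, 0]) + np.cos(0.5 * X[:, 1])            # samples of a smooth test surface
gauss = lambda r: np.exp(-0.5 * r ** 2)
w = rbf_interpolate(X, d, gauss, lam=1e-8)
print(rbf_evaluate(np.array([5.0, 5.0]), X, w, gauss))
```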
2 Classical Interpolation Techniques

2.1 Shepard's Interpolation
Shepard formulated an explicit function for interpolating scattered data [16]. The value of the modeling
function is calculated as a weighted average of data values according to their distance from the point at
which the function is to be evaluated. Shepard's expression for globally modeling a surface is

S(x, y) = \begin{cases} \dfrac{\sum_{i=1}^{n} z_i / r_i^2}{\sum_{i=1}^{n} 1 / r_i^2}, & r_i \neq 0, \\[1ex] z_i, & r_i = 0, \end{cases}    (10)

where r_i is the standard distance metric

r_i = [(x - x_i)^2 + (y - y_i)^2]^{1/2}.    (11)
The above algorithm is global; local Shepard methods can be formed by evaluating S(x, y) from the weighted
data values within a disk of radius R from (x, y). A function \mu(r) is defined that ensures the local behavior
of the interpolating method by calculating a surface model for any r \leq R, and which also weights the
points at r \leq R/3 more heavily, i.e.,

\mu(r) = \begin{cases} 1/r, & 0 < r \leq R/3, \\ 27(r/R - 1)^2 / (4R), & R/3 < r \leq R, \\ 0, & R < r. \end{cases}    (12)

Therefore, for points that are within R-distance of (x, y), the surface is given by

S(x, y) = \begin{cases} \dfrac{\sum_{i=1}^{n} z_i \, \mu(r_i)^2}{\sum_{i=1}^{n} \mu(r_i)^2}, & r_i \neq 0, \\[1ex] z_i, & r_i = 0. \end{cases}    (13)
A usable number of points must fall within the local region of radius R for this method to be applicable.
Shepard's method is simple to implement and can be localized, which is advantageous for large cloud
data sets [3].
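For illustration, a minimal sketch of the global Shepard interpolant of Eq. (10) follows; the function names, the sample data, and the tolerance used to detect an exact hit (r_i = 0) are our assumptions.

```python
import numpy as np

def shepard_global(x, y, xi, yi, zi, eps=1e-12):
    """Global Shepard interpolation, Eq. (10): inverse-squared-distance weighting."""
    r2 = (x - xi) ** 2 + (y - yi) ** 2
    if np.any(r2 < eps):                 # query coincides with a data point: S(x, y) = z_i
        return zi[np.argmin(r2)]
    w = 1.0 / r2                         # weights 1 / r_i^2
    return np.sum(w * zi) / np.sum(w)

# Example: interpolate a scattered sample of a smooth surface at (5, 5).
rng = np.random.default_rng(0)
xi = rng.uniform(0.0, 10.0, 200)
yi = rng.uniform(0.0, 10.0, 200)
zi = np.sin(0.5 * xi) + np.cos(0.5 * yi)
print(shepard_global(5.0, 5.0, xi, yi, zi))
```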
2.2 Thin Plate Splines
The method of thin plate splines [2] and the multiquadric method proposed by Hardy
[5] can both be classified as interpolating functions composed of a sum of radial basis functions. The
basis functions are radially symmetric about the points at which the interpolating function is evaluated.
Conceptually, the method is simple to understand in terms of a thin, deformable plate passing through the
data points collected off the surface of the object. The thin plate spline radial basis functions are obtained
from the solution of minimizing the energy of a thin plate constrained to pass through loads positioned at
the cloud data points. The modeling surface is constructed from the radial basis functions \varphi_i(x, y) by expanding
them in a series of (n + 3) terms with coefficients c_i:

S(x, y) = \sum_{i=1}^{n} c_i \, \varphi_i(x, y),    (14)

where the basis functions are given by

\varphi_i(x, y) = r_i^2 \ln(r_i).    (15)
The modeling surface function S(x, y) has the form derived in Harder and Desmarais [11]:

S(x, y) = a_0 + a_1 x + a_2 y + \sum_{i=1}^{n} c_i r_i^2 \ln(r_i).    (16)

The coefficients are determined by substituting the discrete data set into (16) and solving the resulting set of
linear equations:

\sum_{i=1}^{n} c_i = 0,    (17)

\sum_{i=1}^{n} x_i c_i = 0,    (18)

\sum_{i=1}^{n} y_i c_i = 0,    (19)

f(x_j, y_j) = a_0 + a_1 x_j + a_2 y_j + \sum_{i=1}^{n} c_i r_{ij}^2 \ln(r_{ij}), \quad j = 1, \ldots, n,    (20)

where r_{ij} is the distance from (x_j, y_j) to the data point (x_i, y_i).
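A minimal sketch of the fitting step implied by Eqs. (16)-(20) is given below; it assembles the augmented (n + 3) x (n + 3) linear system with the convention r^2 ln r = 0 at r = 0. The function and variable names are ours, not the authors'.

```python
import numpy as np

def tps_fit(xi, yi, fi):
    """Fit the thin plate spline of Eq. (16) subject to the constraints (17)-(19)."""
    n = len(xi)
    r2 = (xi[:, None] - xi[None, :]) ** 2 + (yi[:, None] - yi[None, :]) ** 2
    # Basis r^2 ln r = 0.5 * r^2 * ln(r^2), with the value 0 taken at r = 0.
    K = 0.5 * r2 * np.log(r2, out=np.zeros_like(r2), where=r2 > 0)
    P = np.column_stack([np.ones(n), xi, yi])           # affine part a0 + a1*x + a2*y
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])     # (n + 3) x (n + 3) system
    b = np.concatenate([fi, np.zeros(3)])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]                             # radial coefficients c_i and (a0, a1, a2)

def tps_eval(x, y, xi, yi, c, a):
    """Evaluate the modeling surface S(x, y) of Eq. (16) at one query point."""
    r2 = (x - xi) ** 2 + (y - yi) ** 2
    phi = 0.5 * r2 * np.log(r2, out=np.zeros_like(r2), where=r2 > 0)
    return a[0] + a[1] * x + a[2] * y + c @ phi
```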
2.3 Hardy's Multiquadric Interpolation
Hardy [5] proposed a method for interpolating scattered data that employs the summation of equations
of quadric surfaces with unknown coefficients. The multiquadric basis functions are the hyperbolic
quadrics

\varphi_i = (r_i^2 + b^2)^{1/2},    (21)

where b is a constant and r_i is the standard Euclidean distance metric. The summation of a series of
hyperbolic quadrics has been found to perform best compared to the other members of the multiquadric
family. The modeling surface is given by

S(x, y) = \sum_{i=1}^{n} c_i \varphi_i = \sum_{i=1}^{n} c_i \left[ (x - x_i)^2 + (y - y_i)^2 + b^2 \right]^{1/2}.    (22)

The cloud data set is substituted into (22), giving a set of linear equations:

z_j = \sum_{i=1}^{n} c_i \left[ (x_j - x_i)^2 + (y_j - y_i)^2 + b^2 \right]^{1/2}, \quad j = 1, \ldots, n.    (23)
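A corresponding sketch for Hardy's method solves the system of Eq. (23) directly; the shape constant b is left as a user-chosen parameter (the default below is only an assumed example value).

```python
import numpy as np

def multiquadric_fit(xi, yi, zi, b=1.0):
    """Solve the linear system of Eq. (23) for the multiquadric coefficients c_i."""
    A = np.sqrt((xi[:, None] - xi[None, :]) ** 2 +
                (yi[:, None] - yi[None, :]) ** 2 + b ** 2)   # basis of Eq. (21) at the data sites
    return np.linalg.solve(A, zi)

def multiquadric_eval(x, y, xi, yi, c, b=1.0):
    """Evaluate the modeling surface S(x, y) of Eq. (22)."""
    return c @ np.sqrt((x - xi) ** 2 + (y - yi) ** 2 + b ** 2)
```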
3 Neural Network as Universal Approximator
Although classification is a very important form of neural computation, neural networks can also be used to
find an approximation of a multivariable function F(x) [6]. This can be approached through supervised
training of an input-output mapping from a data set. The learning proceeds as a sequence of iterative weight
adjustments until a weight vector is found that satisfies a certain criterion.
In a more formal approach, multilayer networks can be used to map \mathbb{R}^n into \mathbb{R} by using P examples of the
function F(x) to be approximated, performing a nonlinear mapping with continuous neurons in the first
layer and then computing a linear combination in the single node of the output layer as follows:

y = \Gamma[V x],    (24)

O = W^t y,    (25)

where V and W are the weight matrices of the hidden and output layers, respectively, and \Gamma[\cdot] is a diagonal
operator matrix consisting of nonlinear squashing functions \gamma(\cdot):
\Gamma = \begin{pmatrix} \gamma(\cdot) & 0 & \cdots & 0 \\ 0 & \gamma(\cdot) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \gamma(\cdot) \end{pmatrix}.    (26)

A function \gamma(\cdot) : \mathbb{R} \to [0, 1] is a squashing function if 1) it is nondecreasing, 2) \lim_{\lambda \to \infty} \gamma(\lambda) = 1, and 3)
\lim_{\lambda \to -\infty} \gamma(\lambda) = 0. Here we have used a bipolar squashing function of the form

\gamma(x) = \frac{2}{1 + e^{-x}} - 1.    (27)
The studies of Funahashi [8] and of Hornik, Stinchcombe, and White [7] prove that multilayer feedforward networks
perform as a class of universal approximators. Although the concept of a nonlinear mapping followed by
a linear mapping pervasively demonstrates the approximating potential of neural networks, the majority of
the reported studies have dealt with the second layer also providing a nonlinear mapping [6]-[7]. The general
network architecture performing the nested nonlinear scheme consists of a single hidden layer and a single
output O such that

O = \Gamma(W \Gamma[V x]).    (28)

This standard class of neural network architectures can approximate virtually any multivariable function of
interest provided that sufficiently many hidden neurons are available.
3.1 Approximation Using Multilayer Networks
A two-layer network was used for surface approximation. The x- and y-coordinates of the data points were
the input to the network, while the function value F(x, y) was the desired response d.
The learning algorithm applied was the error back-propagation technique. This technique calculates
an error signal at the output layer and uses this signal to adjust the network weights in the direction of the
negative gradient of the network error E, so that, for a network with I neurons in the input layer, J
neurons in the hidden layer, and K neurons in the output layer, the weight adjustment is as follows:

\Delta w_{kj} = -\eta \frac{\partial E}{\partial w_{kj}}, \quad k = 1, 2, \ldots, K, \; j = 1, 2, \ldots, J,    (29)

\Delta v_{ji} = -\eta \frac{\partial E}{\partial v_{ji}}, \quad j = 1, 2, \ldots, J, \; i = 1, 2, \ldots, I,    (30)

where

E = \frac{1}{2} \sum_{k=1}^{K} (d_k - O_k)^2.    (31)
The size J of the hidden layer is one of the most important considerations when solving actual problems using
multilayer feedforward networks. The problem of the size choice is under intensive study, with no conclusive
answers available thus far for many tasks. Exact analysis of the issue is rather difficult because of the
complexity of the network mapping and the nondeterministic nature of many successfully completed
training procedures [6]. Here, we tested the network using different numbers of hidden neurons. The degree
of accuracy, reflected by the mean square error, was chosen to be 0.05. Results are provided later in the paper.
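As one possible realization of Eqs. (24)-(31), the sketch below trains a small two-layer network by on-line gradient descent with the bipolar squashing function of Eq. (27); the hidden-layer size J, learning rate, bias handling, and epoch count are assumed values rather than the settings used in the experiments.

```python
import numpy as np

def squash(x):
    """Bipolar squashing function of Eq. (27)."""
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

def train_mlp(X, d, J=20, eta=0.01, epochs=2000, seed=0):
    """Two-layer network O = W^t Gamma[V x], trained by back-propagation, Eqs. (29)-(31)."""
    rng = np.random.default_rng(seed)
    N, I = X.shape
    Xb = np.hstack([X, np.ones((N, 1))])             # append a bias input
    V = rng.normal(scale=0.5, size=(J, I + 1))       # hidden-layer weights v_ji
    W = rng.normal(scale=0.5, size=J)                # output-layer weights w_kj (K = 1)
    for _ in range(epochs):
        for xb, di in zip(Xb, d):
            y = squash(V @ xb)                       # hidden activations, Eq. (24)
            O = W @ y                                # linear output, Eq. (25)
            err = di - O                             # contributes (1/2) err^2 to E, Eq. (31)
            dgamma = 0.5 * (1.0 + y) * (1.0 - y)     # derivative of the bipolar squash
            grad_W = err * y                         # negative gradient of E w.r.t. W, Eq. (29)
            grad_V = err * np.outer(W * dgamma, xb)  # negative gradient of E w.r.t. V, Eq. (30)
            W += eta * grad_W
            V += eta * grad_V
    return V, W

def mlp_eval(point, V, W):
    """Network response for a single (x, y) input."""
    return W @ squash(V @ np.append(point, 1.0))
```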
3.2 Approximation Using Functional Link Networks
Instead of carrying out a multi-stage transformation, as in multilayer networks, input/output mapping can
also be achieved through an artificially augmented single-layer network. The concept of training an augmented and expanded network leads to the so-called functional link network introduced by Pao (1989) [10].
Functional link networks are single-layer neural networks that are able to handle linearly non-separable tasks
due to an appropriately enhanced representation. This enhanced representation is obtained by augmenting
the input with higher-order terms, which are generally nonlinear functions of the input.
The functional link network was used to approximate the surfaces by enhancing the two-component input
pattern (x, y) with twenty-six orthogonal components such as xy, sin(nx), cos(ny), etc., for n = 1, 2, \ldots, m.
The output of the network can be expressed as follows:

F(x, y) = x w_x + y w_y + xy\, w_{xy} + \sum_{i=1}^{m} \left[ \sin(ix)\, w_{x_i}^{s} + \cos(ix)\, w_{x_i}^{c} \right] + \sum_{i=1}^{m} \left[ \sin(iy)\, w_{y_i}^{s} + \cos(iy)\, w_{y_i}^{c} \right].    (32)
The basic mathematical theory indicates that the functional expansion model should converge to a flat-net
solution if a large enough number of additional independent terms is used.
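Because the enhanced representation of Eq. (32) is linear in the weights, a minimal sketch can fit them in closed form by least squares instead of iterative training; the expansion order m and the exact feature set below are our assumptions approximating the twenty-six components mentioned above.

```python
import numpy as np

def flink_features(x, y, m=6):
    """Enhanced input pattern of Eq. (32): x, y, xy and sinusoidal terms up to order m."""
    feats = [x, y, x * y]
    for i in range(1, m + 1):
        feats += [np.sin(i * x), np.cos(i * x), np.sin(i * y), np.cos(i * y)]
    return np.stack(feats, axis=-1)

def flink_fit(X, d, m=6):
    """Fit the single-layer functional link weights by linear least squares."""
    A = flink_features(X[:, 0], X[:, 1], m)
    w, *_ = np.linalg.lstsq(A, d, rcond=None)
    return w

def flink_eval(x, y, w, m=6):
    """Network output F(x, y) of Eq. (32) for scalar or array inputs."""
    return flink_features(np.asarray(x, dtype=float), np.asarray(y, dtype=float), m) @ w
```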
4 Hypersurface Reconstruction as an Ill-Posed Problem
The strict interpolation procedure described here may not be a good strategy for the training of function
approximators for certain classes of tasks because of poor generalization to new data, for the following reason. When
the number of data points in the training set is much larger than the number of degrees of freedom of the
underlying physical process, and we are constrained to have as many basis functions as data points, the
problem is over-determined. Consequently, the algorithm may end up fitting misleading variations due to
noise in the input data, thereby resulting in degraded generalization performance [1].
The approximation problem belongs to a generic class of problems called inverse problems. An inverse
problem may be well-posed or ill-posed. The term well-posed has been used in applied mathematics since
the time of Hadamard in the early 1900s. To explain this terminology, assume that we have a domain X
and a range Y, taken to be metric spaces, that are related by a fixed but unknown mapping F. The problem
of reconstructing the mapping F is said to be well-posed if three conditions are satisfied [1]:
1. Existence: For every input vector x \in X, there exists an output y = F(x), where y \in Y.

2. Uniqueness: For any pair of input vectors x, t \in X, we have F(x) = F(t) if, and only if, x = t.

3. Continuity: The mapping is continuous; that is, for any \epsilon > 0 there exists \delta = \delta(\epsilon) such that the
condition d_X(x, t) < \delta implies d_Y(F(x), F(t)) < \epsilon, where d(\cdot, \cdot) is the distance between two
arguments in their respective spaces.
If these conditions are not satisfied, the inverse problem is said to be ill-posed. Function approximation is
an ill-posed inverse problem for the following reasons. First, there is not as much information in the training
data as we really need to reconstruct the input-output mapping uniquely; hence the uniqueness criterion is
violated. Second, the presence of noise or imprecision in the input data adds uncertainty to the reconstructed
input-output mapping in such a way that an output may be produced outside the range for a given input
inside the domain; in other words, the continuity condition is violated.
4.1 Regularization Theory

Tikhonov [12] proposed a method called regularization for solving ill-posed problems. In the context of
approximation problems, the basic idea of regularization is to stabilize the solution by means of some auxiliary nonnegative functional that embeds prior information, e.g., smoothness constraints on the input-output
mapping (i.e., the solution to the approximation problem), and thereby turn an ill-posed problem into a well-posed one [1][11].
According to Tikhonov's regularization theory, the function F is determined by minimizing a cost functional
E(F), so called because it maps functions (in some suitable function space) to the real line. It involves two
terms:

1. Standard error term: This first term, denoted by E_s(F), measures the standard error between the
desired response d_i and the actual response y_i for training examples i = 1, 2, \ldots, N. Specifically,

E_s(F) = \sum_{i=1}^{N} (d_i - y_i)^2 = \sum_{i=1}^{N} [d_i - F(x_i)]^2.    (33)
2. Regularization term: This second term, denoted by E_c(F), depends on the geometric properties of the
approximating function F(x). Specifically, we write

E_c(F) = \|PF\|^2,    (34)

where P is a linear differential operator. Prior information about the form of the solution is embedded
in the operator P, which naturally makes the selection of P problem-dependent. P is referred to as a
stabilizer in the sense that it stabilizes the solution F, making it smooth and therefore continuous.
The analytical approach used for the situation described here draws a strong analogy between linear differential operators and matrices, thereby placing both types of models in the same conceptual framework.
Thus the symbol \|\cdot\| denotes a norm imposed on the function space to which PF belongs. By a function
space we mean a normed vector space of functions. Ordinarily, the function space used here is the L_2 space that
consists of all real-valued functions f(x), x \in \mathbb{R}^p, for which |f(x)|^2 is Lebesgue integrable. The function
f(x) denotes the actual function that defines the underlying physical process responsible for the generation
of the input-output pairs. Strictly speaking, we require the function f(x) to be a member of a reproducing
kernel Hilbert space (RKHS) with a reproducing kernel in the form of the Dirac delta distribution [14]. The
simplest RKHS satisfying the previously mentioned conditions is the space S of rapidly decreasing, infinitely
continuously differentiable functions, that is, the classical space of rapidly decreasing test functions of
the Schwartz theory of distributions, with finite P-induced norm:

H_P = \{ f \in S : \|Pf\| < \infty \},    (35)
where the norm of Pf is taken with respect to the range of P, assumed to be another Hilbert space. The
principle of regularization may now be stated as follows: find the function F(x) that minimizes the cost
functional E(F) defined by

E(F) = E_s(F) + \lambda E_c(F) = \sum_{i=1}^{N} [d_i - F(x_i)]^2 + \lambda \|PF\|^2,    (36)

where \lambda is a positive real number called the regularization parameter.
4.2 Regularization Networks

Poggio et al. [13] suggested the use of some form of prior information about the input-output mapping to make the
learning problem well posed, so that generalization to new data is feasible. They also suggested a network
structure that they called a regularization network. It consists of three layers. The first layer is composed of
input (source) nodes whose number is equal to the dimension p of the input vector x. The second layer is
a hidden layer, composed of nonlinear units that are connected directly to all the nodes in the input layer.
There is one hidden unit for each data point x_i, i = 1, 2, \ldots, N, where N is the number of training examples.
The activations of the individual hidden units are defined by the Green's function G(x, x_i) given by [20]

G(x, x_i) = \exp\!\left( -\frac{1}{2\sigma_i^2} \sum_{k=1}^{p} (x_k - x_{i,k})^2 \right).    (37)
This Green's function is recognized to be a multivariate Gaussian function. Correspondingly, the regularized
solution takes on the following special form:

F(x) = \sum_{i=1}^{N} w_i \exp\!\left( -\frac{1}{2\sigma_i^2} \sum_{k=1}^{p} (x_k - x_{i,k})^2 \right),    (38)

which consists of a linear superposition of multivariate Gaussian basis functions with centers x_i (located at
the data points) and widths \sigma_i [1].
The output layer consists of a single linear unit, fully connected to the hidden layer. The weights of
the output layer are the unknown coefficients of the expansion, defined in terms of the Green's functions
G(x, x_i) and the regularization parameter \lambda by

w = (G + \lambda I)^{-1} d.    (39)

This regularization network assumes that the Green's function G(x, x_i) is positive definite for all i. Provided
that this condition is satisfied, which it is in the case of G(x, x_i) having the Gaussian form, for example,
the solution produced by this network will be an optimal interpolant in the sense that it minimizes
the functional E(F). Moreover, from the viewpoint of approximation theory, the regularization network has
three desirable properties [17]:
1. The regularization network is a universal approximator in that it can approximate arbitrarily well any
multivariate continuous function on a compact subset of \mathbb{R}^p, given a sufficiently large number of hidden
units.

2. Since the approximation scheme derived from regularization theory is linear in the unknown coefficients,
it follows that the regularization network has the best approximation property. This means that, given
an unknown nonlinear function f, there always exists a choice of coefficients that approximates f better
than all other possible choices.

3. The solution computed by the regularization network is optimal. Optimality here means that the
regularization minimizes a functional that measures how much the solution deviates from its true value
as represented by the training data.
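A minimal sketch of the Gaussian regularization network of Eqs. (37)-(39) follows; a single common width sigma is used instead of per-unit widths \sigma_i, and both sigma and the regularization parameter lambda are assumed example values.

```python
import numpy as np

def reg_network_fit(X, d, sigma=1.0, lam=0.1):
    """Output weights of Eq. (39): w = (G + lam*I)^{-1} d, with Green's matrix from Eq. (37)."""
    diff = X[:, None, :] - X[None, :, :]
    G = np.exp(-np.sum(diff ** 2, axis=-1) / (2.0 * sigma ** 2))   # one hidden unit per data point x_i
    return np.linalg.solve(G + lam * np.eye(len(X)), d)

def reg_network_eval(x, X, w, sigma=1.0):
    """Regularized solution F(x) of Eq. (38) at a single query point x."""
    g = np.exp(-np.sum((X - x) ** 2, axis=1) / (2.0 * sigma ** 2))
    return w @ g
```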
5 Results

The performance of the classical techniques of Section 2 and the neural network approaches is quantitatively
compared, using synthetic range data for four typical free-form surface patches suggested by Bradley and
Vickers [3] and two surfaces suggested in this paper. The six test surfaces are:
1. Surface 1:

z = \sin(0.5x) + \cos(0.5y).    (40)

2. Surface 2:

z = \sin(x) + \cos(y).    (41)

3. Surface 3:

z = e^{-[(x-5)^2 + (y-5)^2]/4}.    (42)

4. Surface 4:

z = \tanh(x + y - 11).    (43)

5. Surface 5:

z = \text{Rectangular Box}.    (44)

6. Surface 6:

z = \text{Pyramid}.    (45)
The first two surfaces are bivariate sinusoidal functions, with Surface 2 having twice the spatial frequency
of Surface 1. Many consumer items are typically composed of smoothly varying free-form patches similar
to Surface 1, while Surface 2 is less commonly found. Surface 3 has a peaked form, and Surface 4 is smooth
with a sharp ridge running diagonally across it. Surfaces 5 and 6 have sharp edges and were included to test the
modeling techniques on discontinuous surfaces. Data sets were generated using the six surfaces, with each
set consisting of 2500 points contained in a rectangular patch with x and y varying from 0.0 to 10.0. Testing
of these surface fitting methods was done by local interpolation over 8 x 8 overlapping square regions.
All methods were applied to a data set mesh of 900 points contained in the same rectangular patch as the
original data set [9].
From the results, we can deduce that Hardy's multiquadric interpolation provides a large improvement over Shepard's interpolation and the thin plate spline method for the first four surfaces, while
performing the worst of the three methods for the sharp-edged surfaces. The reconstructed surfaces using all
interpolation techniques are shown in Figures 1-6.
The interpolations using the neural networks perform exceptionally well on the first four surfaces, but, due to
the sharp edges in Surfaces 5 and 6, their performance there was not as good. The networks seem to have difficulty
with sharp transitions and discontinuities. One method that could be employed is block training,
in which the neural network learns the surface in smaller patches. This should decrease learning
time and make the training cycles less complicated, although it creates the need for local regions. During
training, the weights are updated for each training point, and the error for that training point and the corresponding
weight set is calculated. When a large number of training pairs is used, the weights change significantly
from the beginning to the end of the training cycle. Therefore, the calculated error is not equivalent to the true
error based on the final weight set, because each error calculation has been made with a
different weight set. A correction can be made by simply recalculating the true error using the final
weight set at the end of each training cycle.
It is clear that the neural network methods produced smooth surfaces compared to those produced by Hardy's
multiquadric interpolation, without the need to construct local regions. In general, the neural network
approach is far superior in terms of ease of implementation. The results also show the potential of the
functional link neural network as an approximator; it is easier to implement than the multilayer neural
network and converges faster [9]. However, neither of the two networks performed well on Surface 6,
possibly due to the presence of two discontinuities.
A new approach for free-form surface modeling is to combine neural networks with classical techniques into
a hybrid interpolation method. The surface can be divided into smoothly varying portions and areas of
sharp transitions. The neural network could then be used to approximate the smoothly varying areas, while
a classical technique such as Hardy's multiquadric interpolation could be used on the areas of sharp transition.
6 Applications
Recently, laser range finders (also known as 3-D laser scanners) have been employed to scan multiple views
of an object. The scanner output is usually an unformatted file of large size (known as the "cloud of data")
[3][31]. In order to use the cloud of data for surface reconstruction, a registration technique is implemented
to establish correspondence with the actual surface. Laser scanners are convenient in applications where the
object is irregular but, in general, smooth. The corresponding surfaces of these objects are called
"free-form surfaces". As Besl [31] stated, a free-form surface is not constrained to be piecewise-planar,
piecewise-quadratic, piecewise-superquadratic, or cylindrical. However, a free-form surface is smooth in
the sense that the surface normal is well-defined and continuous everywhere (except at isolated cusps, vertices, and edges). Common examples of free-form shapes include human faces, cars, airplanes, boats,
clay models, sculptures, and terrain.
Historically, free-form shape matching using 3-D data was done earliest by Faugeras and his group at INRIA
[33], who demonstrated effective matching with a Renault auto part (a steering knuckle) in the early
1980s. This work popularized the use of quaternions for least-squares registration of corresponding 3-D
point sets in the computer vision community. The alternative use of the singular value decomposition (SVD)
algorithm was not as widely known at that time. The primary limitation of this work was that it
relied on the probable existence of reasonably large planar regions within a free-form shape.
There exist a number of studies dealing with global shape matching or registration of free-form surfaces
on limited classes of shapes. For example, there have been a number of studies on point sets with known
correspondence [31]-[35]. Bajcsy [34] is an example of studies on polyhedral models and piecewise models.
As we indicated above, the output of the laser scanner (the cloud of data) is a sparse, unformatted
file. The goal is to build a model of the physical surface using these data. Two issues need to be examined:
(i) how to fit a surface using the data from a single view; and (ii) how to merge the data from multiple views
to build an overall 3-D model of the object. Bradley and Vickers [3] surveyed a number of algorithms for
surface reconstruction using the cloud of data from one viewpoint. Results were shown on basic surfaces like
sinusoids and exponentials. They also suggested an algorithm for surface modeling based on the following
steps: (i) divide the cloud of data into meshes of smaller size; (ii) fit a surface using a subset of data
points on each mesh; (iii) merge the surfaces of the meshes together. This approach has been shown to be
convenient for simple surfaces with little or no occlusion.
References
[1] S. Haykin. Neural Networks: A Comprehensive Foundation. Macmillan College Publishing Company, 1994.
[2] D. Shepard. A two-dimensional interpolation function for irregularly spaced data. Proc. ACM Nat. Conf.,
pages 517{524, 1964.
[3] C. Bradley and G. Vickers, Free-form Surface Reconstruction for Machine Vision Rapid Prototyping,
Optical Engineering, Vol. 32, No. 9, pp. 2191-2200, September 1993.
[4] B. Bhanu and C. C. Ho, CAD-based 3-D object representation for robot vision, IEEE Comput., Vol. 20,
No. 8, pp. 19-36, Aug. 1987.
[5] R. L. Hardy. Multiquadric equations of topography and other irregular surfaces; interpolation using
surface splines. J. Geophys. Res., 76, 1971.
[6] J. Zurada. Introduction to Artificial Neural Systems. West Publishing, 1992.
[7] K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural
Networks, pages 359-366, 1989.
[8] K. Funahashi. On the approximate realization of continuous mappings by neural networks. Neural
Networks, pages 183-192, 1989.
[9] Mohamed N. Ahmed and Aly A. Farag. Free-form surface reconstruction using neural networks. Proc. of
ANNIE'94, (1):51-56, 1994.
[10] Y. H. Pao. Adaptive Pattern Recognition and Neural Networks. Addison-Wesley, 1989.
[11] Courant, R. and D. Hilbert, Methods of Mathematical Physics, Vols. 1 and 2, New York, Wiley, 1970.
[12] Tikhonov, A. N., On solving incorrectly posed problems and method of regularization, Doklady Akademii
Nauk USSR,151, pp. 501-504, 1963.
[13] T. Poggio and F. Girosi. Networks for approximation and learning. Proc. IEEE, pages 1481-1497, 1990.
[14] Poggio, T., A theory of how the brain might work. Cold Spring Harbor Symposium on Quantitative
Biology, 55, 899-910, 1990.
[15] Poggio, T., and S. Edelman, A network that learns to recognize three-dimensional objects. Nature
(London), 343, 263-266, 1990.
[16] Poggio, T., and F. Girosi, Regularization algorithms for learning that are equivalent to multilayer
networks. Science, 247, 978-982, 1990.
[17] Poggio, T., V. Torre, and C. Koch, Computational vision and regularization theory. Nature (London),
317, 314-319, 1985.
[18] Broomhead, D.S. and D. Lowe, Multivariable functional interpolation and adaptive networks, Complex
Systems, 2, pp. 321-355, 1988.
[19] Broomhead, D.S., D. Lowe and A.R. Webb, A sum rule satisfied by optimized feedforward layered
networks, RSRE Memorandum No. 4341, Royal Signals and Radar Establishment, Malvern, UK, 1989.
[20] Powell, M.J.D., Radial basis functions for multivariate interpolation: A review, IMA Conference on
Algorithms for the Approximation of Functions and Data, pp. 143-167, RMCS, Shrivenham, UK, 1985.
[21] Powell, M.J.D., Radial basis function approximations to polynomials, Numerical Analysis 1987 Proceedings, pp. 223-241, Dundee, UK, 1988.
[22] Lowe, D., Adaptive radial basis function nonlinearities, and the problem of generalization, 1st IEE
International Conference on Artificial Neural Networks, pp. 171-175, London, UK, 1989.
[23] Lowe, D., On the iterative inversion of RBF networks: A statistical interpretation, 2nd IEE International
Conference on Artificial Neural Networks, pp. 29-33, Bournemouth, UK, 1991.
[24] Lowe, D. and A.R. Webb, Adaptive networks, dynamical systems, and the predictive analysis of time
series, 1st IEE International Conference on Artificial Neural Networks, pp. 95-99, London, UK, 1989.
[25] Girosi F., and T. Poggio, Representative properties of networks: Kolmogorov's theorem is irrelevant,
Neural Computation, 1, pp. 465-469, 1989.
[26] Girosi F., and T. Poggio, Networks and best approximation property, Biological Cybernetics, 63, pp.
169-176, 1990.
[27] Dorny, C. N., A Vector Space Approach to Models and Optimization, New York, Wiley 1975.
[28] Simard, P. and Y. LeCun, Reverse TDNN: An architecture for trajectory generation, Advances in Neural
Information Processing Systems 4, pp. 579-588, Baltimore, MD, 1992.
[29] Tikhonov, A. N., On regularization of ill-posed problems, Doklady Akademii Nauk USSR,153, pp. 49-52,
1973.
[30] Tikhonov, A. N. and V. Y. Arsenin, Solutions of Ill-posed Problems, Washington, DC, W. H. Winston,
1977.
[31] P. J. Besl, Free-form Surface Matching, In H. Freeman (Ed.), Machine Vision for Three-Dimensional
Scenes, Academic Press, San Diego, CA, pp. 25-71, 1990.
[32] P. Besl and D. McKay, A method for registration of 3-D shapes, IEEE Transactions on Pattern Analysis
and Machine Intelligence, PAMI-14, No. 2, pp. 239-256, Feb. 1992.
[33] O. D. Faugeras and M. Hebert, The representation, recognition, and locating of 3-D objects, Int. J.
Robotics Res., Vol. 5, No. 3, pp. 27-52, 1986.
[34] R. Bajcsy and F. Solina, Three-dimensional object representation, Proc. 1st. Inter. Conf. Comput.
Vision, (London), June 8-11, 1989, pp. 231-240.
[35] D. Terzopoulos, J. Platt, A. Barr, and K. Fleischer, "Elastically deformable models," Comput. Graphics,
Vol. 21, No. 4, pp. 205-214, July 1987.
[36] R. Franke. Scattered data interpolation. Math Comput., (38):181{200, 1982.
Figure 1: Comparison of reconstruction of Surface 1 using all methods: (a) original surface, (b) Shepard interpolation, (c) thin plate spline, (d) Hardy's multiquadric interpolation, (e) multilayer neural network, and (f) functional link neural network.
Figure 2: Comparison of reconstruction of Surface 2 using all methods: (a) original surface, (b) Shepard interpolation, (c) thin plate spline, (d) Hardy's multiquadric interpolation, (e) multilayer neural network, and (f) functional link neural network.
Figure 3: Comparison of reconstruction of Surface 3 using all methods: (a) original surface, (b) Shepard interpolation, (c) thin plate spline, (d) Hardy's multiquadric interpolation, (e) multilayer neural network, and (f) functional link neural network.
Figure 4: Comparison of reconstruction of Surface 4 using all methods: (a) original surface, (b) Shepard interpolation, (c) thin plate spline, (d) Hardy's multiquadric interpolation, (e) multilayer neural network, and (f) functional link neural network.
Figure 5: Comparison of reconstruction of Surface 5 using all methods: (a) original surface, (b) Shepard interpolation, (c) thin plate spline, (d) Hardy's multiquadric interpolation, (e) multilayer neural network, and (f) functional link neural network.
Figure 6: Comparison of reconstruction of Surface 6 using all methods: (a) original surface, (b) Shepard interpolation, (c) thin plate spline, (d) Hardy's multiquadric interpolation, (e) multilayer neural network, and (f) functional link neural network.