ASP UNIT-2 Question Bank


UNIT 2

PART A
1) What are the different approaches to the optimization of a linear discrete-time
filter?
There are two approaches to the optimization of a linear discrete-time filter:
1. The Principle of Orthogonality approach
2. The Error-Performance Surface approach
2) State the Principle of Orthogonality.
The necessary and sufficient condition for the cost function J to attain its
minimum value is that the corresponding value of the estimation error e_o(n) be
orthogonal to each input sample that enters into the estimation of the desired
response at time n.
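The orthogonality condition can be checked numerically. The sketch below is an assumed illustrative example (not part of the question bank): it fits an FIR estimator by least squares and confirms that the resulting estimation error is uncorrelated with every tap input that entered the estimate.

```python
# Numerical check of the principle of orthogonality: at the optimum, the
# estimation error e_o(n) is uncorrelated with each tap input u(n - k),
# k = 0, ..., M-1. Signals and model below are assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
N, M = 100_000, 3

u = rng.standard_normal(N)                    # filter input
d = 0.7 * u - 0.3 * np.roll(u, 1) + 0.05 * rng.standard_normal(N)  # desired response

# Tap-input matrix: column k holds u(n - k).
U = np.column_stack([np.roll(u, k) for k in range(M)])
U[:M - 1] = 0.0                               # discard wrapped-around samples

w_o = np.linalg.lstsq(U, d, rcond=None)[0]    # optimum weights (least squares)
e_o = d - U @ w_o                             # estimation error at the optimum

# Sample cross-correlation E[e_o(n) u(n-k)] vanishes for every k.
corr = U.T @ e_o / N
print(np.max(np.abs(corr)))                   # ~0
```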

3) Mention the significance of the Wiener-Hopf equations.

The system of equations defines the optimum filter coefficients, in the most
general setting, in terms of two correlation functions: the autocorrelation
function of the filter input and the cross-correlation between the filter input and
the desired response. These equations are called the Wiener-Hopf equations.
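For an FIR filter, the Wiener-Hopf equations take the matrix form R w_o = p, where R is the autocorrelation matrix of the tap inputs and p is the cross-correlation vector between the tap inputs and the desired response. A minimal numerical sketch, assuming a system-identification setup with synthetic data (the signals and the "unknown" system are illustrative assumptions, not from the question bank):

```python
# Solve the Wiener-Hopf equations R w_o = p for an FIR filter using
# sample estimates of the correlation functions.
import numpy as np

rng = np.random.default_rng(0)

# Desired response: an assumed FIR system driven by white noise, plus a
# small amount of measurement noise.
h_true = np.array([0.8, -0.4, 0.2])           # "unknown" system to identify
N = 50_000
u = rng.standard_normal(N)                    # filter input (white, unit variance)
d = np.convolve(u, h_true, mode="full")[:N] + 0.01 * rng.standard_normal(N)

M = 3                                         # number of taps
# Tap-input matrix: column k holds u(n - k).
U = np.column_stack([np.roll(u, k) for k in range(M)])
U[:M - 1] = 0.0                               # discard wrapped-around samples

R = U.T @ U / N                               # sample autocorrelation matrix
p = U.T @ d / N                               # sample cross-correlation vector

w_o = np.linalg.solve(R, p)                   # Wiener-Hopf solution R w_o = p
print(w_o)                                    # close to h_true
```

With a white input, R is close to the identity matrix, so w_o essentially reads off the cross-correlation vector; for a colored input, the solve step is what undoes the input correlation.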

4) How is the error-performance surface of an FIR filter characterized?

For the case when the tap inputs of the FIR filter and the desired response
are jointly stationary, the cost function, or mean-square error, J is precisely a
second-order function of the tap weights in the filter. Consequently, we may
visualize the dependence of J on the tap weights w0, w1, ..., w(M-1) as a
bowl-shaped (M + 1)-dimensional surface with M degrees of freedom
represented by the tap weights of the filter; it is (M + 1)-dimensional because
we have the variance σd² plus the M tap weights, all of which are needed to
fully describe the cost function J. This surface is characterized by a unique
minimum. For obvious reasons, we refer to the surface so described as the
error-performance surface of the FIR filter.

5) Justify that the Steepest Descent method is a deterministic search method.

6) Mention the distinctive features of the LMS algorithm.

7) Compare the localized optimality of the LMS algorithm with the Stochastic
Gradient Descent method.

8) Mention the role of learning curves.

9) Define Misadjustment with respect to the LMS algorithm.

10) How does the rate of convergence of the LMS algorithm depend on the
step-size parameter?
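Several of the questions above concern the LMS algorithm and its step-size parameter. A minimal sketch (an assumed example, not from the question bank) illustrating the dependence of the rate of convergence on the step size μ: a larger stable μ drives the error down in fewer iterations, at the cost of a larger steady-state fluctuation (misadjustment).

```python
# LMS adaptation with two step sizes on the same identification problem:
# the learning curve with the larger (stable) mu converges faster.
import numpy as np

def lms(u, d, M, mu):
    """Run the LMS algorithm; return the squared-error learning curve."""
    w = np.zeros(M)
    errs = np.zeros(len(u))
    for n in range(M, len(u)):
        x = u[n - M + 1:n + 1][::-1]          # tap-input vector [u(n), ..., u(n-M+1)]
        e = d[n] - w @ x                      # estimation error
        w = w + mu * e * x                    # LMS weight update
        errs[n] = e * e
    return errs

rng = np.random.default_rng(3)
N, M = 5_000, 3
u = rng.standard_normal(N)                    # white input
h = np.array([0.8, -0.4, 0.2])                # assumed "unknown" system
d = np.convolve(u, h, mode="full")[:N] + 0.01 * rng.standard_normal(N)

fast = lms(u, d, M, mu=0.05)
slow = lms(u, d, M, mu=0.005)

# Early in adaptation the larger step size has already driven the error down,
# while the smaller step size is still converging.
print(fast[200:400].mean(), slow[200:400].mean())
```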
PART B

1. Define the problem of Linear Optimum Filtering.

2. Derive the Principle of Orthogonality.

3. Derive the Wiener-Hopf equations and find their solution for FIR filters.

4. Explain the Steepest Descent algorithm by deriving the corresponding
mathematical equations.

5. How does Newton's method remove the limitation of the Steepest Descent
algorithm?

6. Explain the principles of the Stochastic Gradient Descent method.

7. How is the Stochastic Gradient Descent method applied in the Least Mean
Square (LMS) algorithm?

8. Discuss different kinds of Learning curves.

9. Explain the transient behavior and convergence considerations of the LMS
algorithm.

10. Deduce the relation for Misadjustment.
