ADAPTIVE SIGNAL PROCESSING (18PE0EC3B)
QUESTION BANK
UNIT-I
PART-A
1. What are the different kinds of information processing operations?
A:-
• Filtering: commonly used to extract the information of interest from noisy data.
• Smoothing: used for blurring and noise reduction.
• Prediction: estimates the present (or a future) sample from previous samples.

2. Differentiate linear and non-linear filters.
A:-
A filter is said to be linear if the filtered, smoothed, or predicted quantity at the output of the filter is a linear function of the observations applied to the filter input. Otherwise, the filter is nonlinear. The Wiener filter is an example of a linear filter, whereas the adaptive Wiener filter, whose parameters depend on the data, is an example of a nonlinear filter.

3. What is the statistical approach to the solution of the linear filtering problem?
A:-
The statistical approach to the solution of the linear filtering problem is to minimize the mean-square value of the error signal, where the error signal is defined as the difference between some desired response and the actual filter output. For stationary inputs, the resulting solution is commonly known as the Wiener filter.

4. Define error-performance surface.
A:-
A plot of the mean-square value of the error signal versus the adjustable parameters of a linear filter is referred to as the error-performance surface.

5. When is a filter said to be optimum?
A:-
A filter is said to be an optimum filter when the mean-square value of the error between the desired output and the actual output is at a minimum.
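As a worked illustration of Questions 3-5 (not part of the original answers), the short Python/NumPy sketch below evaluates the error-performance surface J(w) of a two-tap real-valued filter and locates its minimum, the Wiener solution; the values of R, p, and the desired-response variance are made-up example numbers.

import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])      # input correlation matrix (assumed example)
p = np.array([0.7, 0.3])        # cross-correlation with the desired response (assumed)
sigma_d2 = 1.0                  # variance of the desired response (assumed)

def mse(w):
    """Mean-square error J(w) = sigma_d^2 - 2 w.p + w.R.w (real-valued case)."""
    return sigma_d2 - 2.0 * w @ p + w @ R @ w

w_o = np.linalg.solve(R, p)     # Wiener solution: minimum point of the paraboloid
J_min = sigma_d2 - p @ w_o      # minimum mean-square error

print("Wiener solution w_o =", w_o)
print("J(w_o) =", mse(w_o), " J_min =", J_min)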
6. What does the word “tracking” mean?
A:-
When an adaptive filtering algorithm operates in a nonstationary environment, the algorithm is required to track statistical variations in the environment. The tracking performance of the algorithm is influenced by two features: (1) the rate of convergence and (2) the steady-state fluctuation due to algorithm noise.

7. Mention the issues of concern for the computational requirements of an adaptive filter.
A:-
The issues of concern include (a) the number of operations (i.e., multiplications, divisions, and additions/subtractions) required to make one complete adaptation cycle of the algorithm, (b) the size of the memory locations required to store the data and the program, and (c) the investment required to program the algorithm on a computer.

8. Mention the issues of concern for the numerical properties of an adaptive filter.
A:-
When an algorithm is implemented numerically, inaccuracies are produced by quantization errors, which in turn are due to the analog-to-digital conversion of the input data and the digital representation of internal calculations. An adaptive filtering algorithm is said to be numerically robust when it is insensitive to variations in the word length used in its digital implementation.

9. How does quantization affect the accuracy and stability of an adaptive filter?
A:-
For the quantization errors that pose a serious design problem, there are two basic issues of concern: numerical stability and numerical accuracy. Numerical stability is an inherent characteristic of an adaptive filtering algorithm. Numerical accuracy, on the other hand, is determined by the number of bits (i.e., binary digits) used in the numerical representation of data samples and filter coefficients.

10. What are the basic processes of a linear adaptive filtering algorithm?
A:-
A linear adaptive filtering algorithm involves two basic processes: (1) a filtering process designed to produce an output in response to a sequence of input data and (2) an adaptive process, the purpose of which is to provide a mechanism for the adaptive control of an adjustable set of parameters used in the filtering process.
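To make the two processes in Question 10 concrete, here is a minimal Python/NumPy sketch (my own illustration; the names adaptive_filter and lms_update, the step size mu, and the example plant taps are assumptions, not from the text). The filtering process forms the output and the error; the adaptive process adjusts the tap weights through a pluggable update rule.

import numpy as np

def adaptive_filter(u, d, M, update):
    """Generic linear adaptive filter: filtering process + adaptive process."""
    w = np.zeros(M)                           # adjustable parameters (tap weights)
    y = np.zeros(len(u))
    e = np.zeros(len(u))
    for n in range(M - 1, len(u)):
        u_n = u[n - M + 1:n + 1][::-1]        # tap-input vector [u(n), ..., u(n-M+1)]
        y[n] = w @ u_n                        # (1) filtering process: filter output
        e[n] = d[n] - y[n]                    #     error = desired response - actual output
        w = update(w, u_n, e[n])              # (2) adaptive process: adjust the parameters
    return w, y, e

# Example usage with a simple LMS-style update rule (mu is an assumed step size):
mu = 0.05
lms_update = lambda w, u_n, e_n: w + mu * e_n * u_n

rng = np.random.default_rng(0)
u = rng.standard_normal(500)
d = np.convolve(u, [0.8, -0.4, 0.2], mode="full")[:len(u)]   # output of an assumed unknown plant
w_hat, _, _ = adaptive_filter(u, d, M=3, update=lms_update)
print("estimated taps:", w_hat)   # should approach [0.8, -0.4, 0.2]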
PART-B
1. Explain the different kinds of information processing operations.
A:-
The three basic kinds of information-processing operations are filtering, smoothing, and prediction, each of which may be performed by an estimator.
• Filtering: an operation that involves the extraction of information about a quantity of interest at time t by using data measured up to and including t.
• Smoothing: an a posteriori (i.e., after the fact) form of estimation, in that data measured after the time of interest are used in the estimation. Specifically, the smoothed estimate at time t′ is obtained by using data measured over the interval [0, t], where t′ < t. There is therefore a delay of t − t′ involved in computing the smoothed estimate. The benefit gained by waiting for more data to accumulate is that smoothing can yield a more accurate estimate than filtering.
• Prediction: the forecasting side of estimation. Its aim is to derive information about what the quantity of interest will be like at some time t + τ in the future (for some τ > 0) by using data measured up to and including time t.
Both filtering and prediction are real-time operations, whereas smoothing is not. By a real-time operation, we mean an operation in which the estimate of interest is computed on the basis of data available now.
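A tiny numerical illustration of the three operations (an assumed example, Python/NumPy): the filtered and predicted estimates use only data up to and including time n, while the smoothed estimate also uses later data and therefore carries a delay.

import numpy as np

rng = np.random.default_rng(1)
t = np.arange(200)
x = np.sin(2 * np.pi * t / 50)                 # quantity of interest
u = x + 0.3 * rng.standard_normal(len(t))      # noisy measurements

n = 100                                        # time of interest
filtered  = u[n - 4:n + 1].mean()              # filtering: data up to and including n
smoothed  = u[n - 2:n + 3].mean()              # smoothing: also uses u[n+1], u[n+2] (delay of 2)
predicted = u[n - 4:n + 1].mean()              # naive one-step forecast of x[n+1] from data up to n

print(f"x[{n}]={x[n]:.3f}  filtered={filtered:.3f}  smoothed={smoothed:.3f}")
print(f"x[{n+1}]={x[n+1]:.3f}  predicted={predicted:.3f}")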
2. Explain the numerical properties of an adaptive algorithm.
A:-
When an algorithm is implemented numerically, inaccuracies are produced by quantization errors, which in turn are due to the analog-to-digital conversion of the input data and the digital representation of internal calculations. Ordinarily, it is the latter source of quantization errors that poses a serious design problem. In particular, there are two basic issues of concern: numerical stability and numerical accuracy. Numerical stability is an inherent characteristic of an adaptive filtering algorithm. Numerical accuracy, on the other hand, is determined by the number of bits (i.e., binary digits) used in the numerical representation of data samples and filter coefficients. An adaptive filtering algorithm is said to be numerically robust when it is insensitive to variations in the word length used in its digital implementation.

3. Describe, in brief, the parameters of comparison among different adaptive algorithms.
A:-
In a recursive algorithm, the parameters of an adaptive filter are updated from one adaptation cycle to the next, so the parameters become data dependent. A wide variety of recursive algorithms have been developed in the literature for the operation of linear adaptive filters. The choice of one algorithm over another is determined by one or more of the following factors:
• Rate of convergence: defined as the number of adaptation cycles required for the algorithm, in response to stationary inputs, to converge to the optimum Wiener solution in the mean-square-error sense. A fast rate of convergence allows the algorithm to adapt rapidly to a stationary environment of unknown statistics.
• Misadjustment: this parameter provides a quantitative measure of the amount by which the final value of the mean-square error, averaged over an ensemble of adaptive filters, deviates from the minimum mean-square error produced by the Wiener solution.
• Tracking: when an adaptive filtering algorithm operates in a nonstationary environment, the algorithm is required to track statistical variations. The tracking performance of the algorithm is influenced by two contradictory features: (1) the rate of convergence and (2) the steady-state fluctuation due to algorithm noise.
• Robustness: for an adaptive filter to be robust, small disturbances can only result in small estimation errors. The disturbances may arise from a variety of factors, internal or external to the filter.
• Computational requirements: here the issues of concern include (a) the number of operations required to make one complete adaptation cycle of the algorithm, (b) the size of the memory locations required to store the data and the program, and (c) the investment required to program the algorithm on a computer.
• Structure: the structure of information flow in the algorithm, which determines the manner in which it is implemented in hardware form. For example, an algorithm whose structure exhibits high concurrency is well suited for implementation using very large-scale integration (VLSI).
• Numerical properties: quantization errors in the digital representation of internal calculations pose a serious design problem. In particular, there are two basic issues of concern: numerical stability and numerical accuracy. Numerical stability is an inherent characteristic of an adaptive filtering algorithm. Numerical accuracy is determined by the number of bits used in the numerical representation of data samples and filter coefficients.

4. Write about the two families of linear adaptive filtering algorithms.
A:-
The LMS and RLS algorithms constitute the two basic algorithms around which each family of algorithms is formulated. The adaptive filtering algorithms differ from each other in the way in which the filtering structure is configured. However, regardless of the filtering structure around which the adaptation of parameters is performed, the algorithms within each family inherit certain properties rooted in the LMS and RLS algorithms. Specifically:
• LMS-based algorithms are model independent, in the sense that no statistical assumptions are made in deriving them. The adaptive filtering algorithm resulting from this approach may be expressed in words as follows:
updated value of tap-weight vector = old value of tap-weight vector + learning-rate parameter × tap-input vector × error signal.
The learning-rate parameter determines the rate at which the adaptation is performed. The recursive algorithm so described is called the least-mean-square (LMS) algorithm, which is simple in computational terms yet effective in performance. However, its convergence behavior is slow and difficult to study mathematically.
• RLS-based algorithms are model dependent, in that their derivations assume the use of a multivariate Gaussian model.
The minimization is achieved using algebraic matrix manipulations, resulting in an update rule that may be expressed, in words, as follows:
updated value of tap-weight vector = old value of tap-weight vector + gain vector × innovation,
where the innovation is the new information put into the filtering process at the updating time. The adaptive filtering algorithm so described is called the recursive least-squares (RLS) algorithm.

5. Discuss the different approaches to the development of linear adaptive filters.
A:-
Basically, we may identify two distinct approaches for deriving recursive algorithms for the operation of linear adaptive filters:
i) Method of stochastic gradient descent: the stochastic gradient approach uses an FIR filter as the structural basis for implementing the linear adaptive filter. For the case of stationary inputs, the cost function, also referred to as the index of performance, is the mean-square error; this cost function is precisely a second-order function of the tap weights in the FIR filter. The dependence of the mean-square error on the unknown tap weights may therefore be viewed as a multidimensional paraboloid, referred to as the error-performance surface; the tap weights corresponding to the minimum point of the surface define the optimum Wiener solution. To develop a recursive algorithm for updating the tap weights of the adaptive FIR filter using the stochastic gradient approach, as the name implies, we start with a stochastic (instantaneous) cost function; differentiating this cost function with respect to the tap-weight vector of the filter, we obtain a gradient vector that is naturally stochastic. The adaptive filtering algorithm resulting from this approach is the LMS algorithm expressed in words above. However, its convergence behavior is slow and difficult to study mathematically.
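The RLS update rule in words from Question 4 (old value plus gain vector times innovation) corresponds, for the real-valued exponentially weighted case, to the sketch below (my own illustration; lam, delta, and the variable names are assumptions, not from the text). The method of least squares described next in part (ii) is the route by which this recursion is derived.

import numpy as np

def rls(u, d, M, lam=0.99, delta=1e-2):
    """Exponentially weighted RLS for a real-valued M-tap adaptive FIR filter."""
    w = np.zeros(M)                              # tap-weight vector
    P = np.eye(M) / delta                        # inverse of the (regularized) correlation matrix
    for n in range(M - 1, len(u)):
        u_n = u[n - M + 1:n + 1][::-1]           # tap-input vector [u(n), ..., u(n-M+1)]
        k = P @ u_n / (lam + u_n @ P @ u_n)      # gain vector
        innovation = d[n] - w @ u_n              # a priori estimation error (innovation)
        w = w + k * innovation                   # old value + gain vector * innovation
        P = (P - np.outer(k, u_n @ P)) / lam     # update the inverse correlation matrix
    return w

Compared with the LMS-style update sketched under Question 10, the extra bookkeeping in P is what buys the faster convergence of RLS, at a higher computational cost per adaptation cycle.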
ii) Method of least squares: the second approach to the development of linear adaptive filtering algorithms is based on the method of least squares. According to this method, we minimize a cost function that is defined as the sum of weighted error squares, where the error is itself defined as the difference between some desired response and the actual filter output. Unlike the method of stochastic gradient descent, this minimization is achieved using algebraic matrix manipulations, resulting in the RLS update rule expressed in words above.

6. Describe the areas of application of adaptive filters.
A:-
The applications of adaptive filters fall into four classes:
a) Identification: in the class of applications dealing with identification, an adaptive filter is used to provide a linear model that represents the best fit (in some sense) to an unknown plant. The plant and the adaptive filter are driven by the same input. The plant output supplies the desired response for the adaptive filter. If the plant is dynamic in nature, the model will be time varying.
b) Inverse modeling: in this second class of applications, the function of the adaptive filter is to provide an inverse model that represents the best fit to an unknown noisy plant.
Ideally, the inverse model has a transfer function equal to the reciprocal of the plant’s transfer function, such that the combination of the two constitutes an ideal transmission medium. A delayed version of the plant (system) input constitutes the desired response for the adaptive filter.
c) Prediction: here, the function of the adaptive filter is to provide the best prediction (in some sense) of the present value of a random signal. The present value of the signal thus serves the purpose of a desired response for the adaptive filter. Past values of the signal supply the input applied to the filter. Depending on the application of interest, the adaptive filter output or the estimation (prediction) error may serve as the system output.
d) Interference cancellation: in this final class of applications, the adaptive filter is used to cancel unknown interference contained in a primary signal, with the cancellation being optimized in some sense.
The primary signal serves as the desired response for the adaptive filter. A reference signal is employed as the input to the filter. The reference signal is derived from a sensor, or a set of sensors, located in relation to the sensor(s) supplying the primary signal in such a way that the information-bearing signal component is weak or essentially undetectable.

7. Explain the correlation matrix using established notation.
A:-
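A sketch of the standard definition (my reconstruction, consistent with the notation u(n), r(·), and R used in Questions 8-10 below), written in LaTeX notation:

\mathbf{u}(n) = \big[u(n),\, u(n-1),\, \ldots,\, u(n-M+1)\big]^{T}

\mathbf{R} = E\big[\mathbf{u}(n)\,\mathbf{u}^{H}(n)\big], \qquad [\mathbf{R}]_{k,l} = r(l-k) = E\big[u(n-k)\,u^{*}(n-l)\big]

Here M is the filter length, r(l) = E[u(n) u*(n − l)] is the autocorrelation sequence of the wide-sense stationary process, and R is an M-by-M Hermitian matrix, since r(−l) = r*(l).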
8. State all the properties of the correlation matrix.
A:-
Property 1: The correlation matrix of a stationary discrete-time stochastic process is Toeplitz.
We say that a square matrix is Toeplitz if all the elements on its main diagonal are equal and if the elements on any other diagonal parallel to the main diagonal are also equal. Writing out the correlation matrix R in its expanded form (shown below), we see that all the elements on the main diagonal are equal to r(0), all the elements on the first diagonal above the main diagonal are equal to r(1), all the elements along the first diagonal below the main diagonal are equal to r*(1), and so on for the other diagonals. We conclude, therefore, that the correlation matrix R is Toeplitz. It is important to recognize, however, that the Toeplitz property of the correlation matrix R is a direct consequence of the assumption that the discrete-time stochastic process represented by the observation vector u(n) is wide-sense stationary. Indeed, we may state that if the discrete-time stochastic process is wide-sense stationary, then its correlation matrix R must be Toeplitz; and, conversely, if the correlation matrix R is Toeplitz, then the discrete-time stochastic process must be wide-sense stationary.
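The expanded form of R referred to in Property 1, written out (a standard reconstruction consistent with the description of the diagonals):

\mathbf{R} =
\begin{bmatrix}
r(0)       & r(1)       & r(2)       & \cdots & r(M-1) \\
r^{*}(1)   & r(0)       & r(1)       & \cdots & r(M-2) \\
r^{*}(2)   & r^{*}(1)   & r(0)       & \cdots & r(M-3) \\
\vdots     & \vdots     & \vdots     & \ddots & \vdots \\
r^{*}(M-1) & r^{*}(M-2) & r^{*}(M-3) & \cdots & r(0)
\end{bmatrix}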
Property 2: The correlation matrix of a discrete-time stochastic process is always nonnegative definite and almost always positive definite.
Let a be an arbitrary (nonzero) M-by-1 complex-valued vector. Define the scalar random variable y as the inner product of a and the observation vector u(n), as shown by y = aH u(n). Taking the Hermitian transpose of both sides and recognizing that y is a scalar, we get y* = uH(n) a. The mean-square value of y is then E[|y|2] = E[y y*] = aH E[u(n) uH(n)] a = aH R a, which is a Hermitian form; since E[|y|2] ≥ 0, it follows that aH R a ≥ 0 for every nonzero a. A Hermitian form that satisfies this condition for every nonzero a is said to be nonnegative definite or positive semidefinite. Accordingly, we may state that the correlation matrix of a wide-sense stationary process is always nonnegative definite.
9. Prove that the correlation matrix of a discrete-time stochastic process is always nonnegative definite and almost always positive definite.
A:-
Let a be an arbitrary (nonzero) M-by-1 complex-valued vector. Define the scalar random variable y as the inner product of a and the observation vector u(n), as shown by y = aH u(n). Taking the Hermitian transpose of both sides and recognizing that y is a scalar, we get y* = uH(n) a. The mean-square value of y is then E[|y|2] = E[y y*] = aH E[u(n) uH(n)] a = aH R a, which is a Hermitian form; since E[|y|2] ≥ 0, it follows that aH R a ≥ 0 for every nonzero a. A Hermitian form that satisfies this condition for every nonzero a is said to be nonnegative definite or positive semidefinite. Accordingly, we may state that the correlation matrix of a wide-sense stationary process is always nonnegative definite. The matrix fails to be positive definite only if aH u(n) = 0 with probability 1 for some nonzero a, a degenerate situation that is rarely encountered in practice; hence R is almost always positive definite.
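The chain of equalities implied by the proof, collected in LaTeX notation:

E\big[|y|^{2}\big]
= E\big[y\,y^{*}\big]
= E\big[\mathbf{a}^{H}\mathbf{u}(n)\,\mathbf{u}^{H}(n)\,\mathbf{a}\big]
= \mathbf{a}^{H}\,E\big[\mathbf{u}(n)\,\mathbf{u}^{H}(n)\big]\,\mathbf{a}
= \mathbf{a}^{H}\mathbf{R}\,\mathbf{a} \;\ge\; 0
\quad \text{for every nonzero } \mathbf{a}.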
10. Prove that the correlation matrix of a stationary discrete-time stochastic process is Toeplitz.
A:-
We say that a square matrix is Toeplitz if all the elements on its main diagonal are equal and if the elements on any other diagonal parallel to the main diagonal are also equal. Writing out the correlation matrix R in its expanded form (as displayed under Property 1 in Question 8), we see that all the elements on the main diagonal are equal to r(0), all the elements on the first diagonal above the main diagonal are equal to r(1), all the elements along the first diagonal below the main diagonal are equal to r*(1), and so on for the other diagonals. We conclude, therefore, that the correlation matrix R is Toeplitz. It is important to recognize, however, that the Toeplitz property of the correlation matrix R is a direct consequence of the assumption that the discrete-time stochastic process represented by the observation vector u(n) is wide-sense stationary. Indeed, we may state that if the discrete-time stochastic process is wide-sense stationary, then its correlation matrix R must be Toeplitz; and, conversely, if the correlation matrix R is Toeplitz, then the discrete-time stochastic process must be wide-sense stationary.
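As a numerical sanity check of Properties 1 and 2 (my own example, not from the question bank; the AR(1) coefficient 0.8, the lengths, and the variable names are assumptions), the Python/NumPy sketch below estimates the autocorrelation sequence of a wide-sense stationary process, builds R as a Toeplitz matrix, and confirms that its eigenvalues are nonnegative:

import numpy as np

rng = np.random.default_rng(2)
N, M = 5000, 4
# WSS test process: AR(1), u(n) = 0.8 u(n-1) + v(n) with white Gaussian v(n)
v = rng.standard_normal(N)
u = np.zeros(N)
for n in range(1, N):
    u[n] = 0.8 * u[n - 1] + v[n]

# Biased autocorrelation estimates r(l), l = 0..M-1 (real process, so r(-l) = r(l))
r = np.array([np.dot(u[l:], u[:N - l]) / N for l in range(M)])

# Property 1 (Toeplitz): element (i, j) of R depends only on i - j
R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])

# Property 2 (nonnegative definite): all eigenvalues of the symmetric R are >= 0
eigvals = np.linalg.eigvalsh(R)
print("R =\n", R)
print("eigenvalues:", eigvals, "-> nonnegative definite:", np.all(eigvals >= -1e-12))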
