AIDSP Merged
return 0;
-----------------------------------------------------------------
• Suppose x1 = 0.65 and x2 = 0.6 for the simple neural network connection shown below. Show the output y in binary form in the "int5" low-bit-precision (5-bit signed fixed-point) representation.
• Below is the code listing the FIR filter from mini-project 2. Revise the code to implement an LMS adaptive FIR filter. Assume that you are given d(n); any other assumptions you make should be written down.
#include "fir.h"

// LMS adaptive FIR filter: d(n), the desired signal, is passed into fir()
void fir(data_t *y, coef_t mu, data_t x, data_t d)
{
    static data_t shift_reg[N];
    static double w[N] = {0.0};               // adaptive weights (persist across calls)
    acc_t acc;
    double error;                             // extra variable required for the LMS update
    int i;

    // code changed from here
    Shift_Loop: for (i = N - 1; i > 0; i--)
        shift_reg[i] = shift_reg[i - 1];      // shift the delay line
    shift_reg[0] = x;                         // newest input sample

    acc = 0;                                  // filter output set to 0
    LMS_Accum_Loop: for (i = 0; i < N; i++)
        acc += w[i] * shift_reg[i];           // calculate filter output y(n)

    error = d - acc;                          // calculate error e(n) = d(n) - y(n)

    Update_Loop: for (i = 0; i < N; i++)
        w[i] = w[i] + mu * error * shift_reg[i];  // adaptive weight adjustment

    *y = acc;                                 // return the filter output
}
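A minimal test sketch for the revised filter is given below. It is an assumption, not part of the project files: it supposes fir.h defines N = 4 and float-based data_t/coef_t/acc_t types, drives fir() with a random input, uses the output of a known 4-tap "plant" filter as d(n), and prints the error, which should shrink as the weights adapt.

/* Assumed fir.h contents for this sketch:
     #define N 4
     typedef float data_t;  typedef float coef_t;  typedef float acc_t;   */
#include <stdio.h>
#include <stdlib.h>
#include "fir.h"

void fir(data_t *y, coef_t mu, data_t x, data_t d);   // revised LMS filter above

int main(void)
{
    const float plant[4] = {0.3f, -0.2f, 0.5f, 0.1f}; // "unknown" filter to identify
    float line[4] = {0.0f, 0.0f, 0.0f, 0.0f};
    float y = 0.0f;

    for (int n = 0; n < 2000; n++) {
        float x = (float)rand() / (float)RAND_MAX - 0.5f;   // random input sample

        // desired signal d(n) = plant output for the same input
        for (int i = 3; i > 0; i--) line[i] = line[i - 1];
        line[0] = x;
        float d = 0.0f;
        for (int i = 0; i < 4; i++) d += plant[i] * line[i];

        fir(&y, 0.1f, x, d);                                // one LMS adaptation step
        if (n % 500 == 0)
            printf("n=%4d  error=% .5f\n", n, d - y);
    }
    return 0;
}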
------------------------------------
• In a DNN, training determines the weights, but ___ applies the weights to determine the output? Inference (not impedance)
• Which DNN has feedback connections? RNN
• Which is the correct ReLU activation function? y = max(0, x)
• LPF is analogous to: Wine
• Which is not correct in the convolution filter mask of the Laplacian convolution?
  1: 1110    4: 0111
• What is the fixed interval or resolution of decimals between 64 and 128? 2^-4 (or 4). If this refers to half-precision floating point, values in [64, 128) share exponent 6 and have 10 fraction bits, so the spacing is 2^(6-10) = 2^-4 = 0.0625.
• Which is not a correct statement about the figure below? Answer: "It is called the Sobel operation." (With rows of -1 0 1 and no centre weight of 2, these are the Prewitt masks, not Sobel; see the C sketch below.)
  sx:              sy:
  -1  0  1          1  1  1
  -1  0  1          0  0  0
  -1  0  1         -1 -1 -1
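To make the sx/sy question concrete, here is a small C sketch that applies both masks at the centre pixel of a made-up 5x5 image containing a vertical edge (the image values are purely illustrative); the horizontal-gradient mask sx responds strongly while sy gives 0.

#include <stdio.h>

int main(void)
{
    const int sx[3][3] = { {-1, 0, 1}, {-1, 0, 1}, {-1, 0, 1} };
    const int sy[3][3] = { { 1, 1, 1}, { 0, 0, 0}, {-1,-1,-1} };
    const int img[5][5] = {
        {10, 10, 10, 80, 80},
        {10, 10, 10, 80, 80},
        {10, 10, 10, 80, 80},
        {10, 10, 10, 80, 80},
        {10, 10, 10, 80, 80},
    };

    // apply both masks at the centre pixel (row 2, col 2)
    int gx = 0, gy = 0;
    for (int r = -1; r <= 1; r++)
        for (int c = -1; c <= 1; c++) {
            gx += sx[r + 1][c + 1] * img[2 + r][2 + c];
            gy += sy[r + 1][c + 1] * img[2 + r][2 + c];
        }

    printf("gx = %d, gy = %d\n", gx, gy);   // vertical edge -> large |gx| (210), gy = 0
    return 0;
}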
65. Show the lookup table by filling out the blank spaces in the Verilog code for computing y with distributed arithmetic (DA). The coefficients are signed and a1 = 0100, a2 = 1010, a3 = 0011.
module case8 (table_in, table_out);
  input  [2:0] table_in;        // widths assumed: one select bit per coefficient
  output reg [3:0] table_out;
  // the 3 coefficients: a1 = 0100, a2 = 1010, a3 = 0011
  always @(table_in)
  begin
    case (table_in)
      0: table_out = ( );       // blank entries 0 through 7 to be filled in (answer below)
    endcase
  end
endmodule
OP:
table_out = 0 x (4) + 0 x (-6) + 0 x (3) = 0000; //0
table_out = 0 x (4) + 0 x (-6) + 1 x (3) = 0011; //a3
table_out = 0 x (4) + 1 x (-6) + 0 x (3) = 1010; //a2
table_out = 0 x (4) + 1 x (-6) + 1 x (3) = 1101; //a2+a3
table_out = 1 x (4) + 0 x (-6) + 0 x (3) = 0100; //a1
table_out = 1 x (4) + 0 x (-6) + 1 x (3) = 0111; //a1+a3
table_out = 1 x (4) + 1 x (-6) + 0 x (3) = 1110; //a1+a2
table_out = 1 x (4) + 1 x (-6) + 1 x (3) = 0001; //a1+a2+a3
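The same table can be generated programmatically. The C sketch below is only an illustration of the arithmetic: each 3-bit address (b1 b2 b3) selects which of a1 = 4, a2 = -6, a3 = 3 are summed, and the sum is printed as a 4-bit two's-complement pattern.

#include <stdio.h>

int main(void)
{
    const int a[3] = {4, -6, 3};     // a1 = 0100, a2 = 1010, a3 = 0011 (4-bit signed)

    for (int addr = 0; addr < 8; addr++) {
        int sum = 0;
        for (int k = 0; k < 3; k++)
            if (addr & (1 << (2 - k)))          // MSB selects a1, LSB selects a3
                sum += a[k];

        unsigned pattern = (unsigned)sum & 0xFu;         // 4-bit two's complement
        printf("table_in = %d%d%d  ->  table_out = ",
               (addr >> 2) & 1, (addr >> 1) & 1, addr & 1);
        for (int b = 3; b >= 0; b--)
            putchar(((pattern >> b) & 1) ? '1' : '0');
        printf("   // %d\n", sum);
    }
    return 0;
}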
65b. Incorrect statement? A low-pass filter may be used to highlight a portion of an image.
65c. Incorrect statement? An SP (single-precision) FP multiplier is 3 times as big as an HP (half-precision) FP multiplier.
66. What kind of logical operation can be implemented using a perceptron with decision boundary x1 + x2 = 1.5? Assume x1 and x2 are inputs and take 0 or 1 as values, and consider the modified perceptron function. Answer: AND (see the sketch below).
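A quick way to see that the boundary x1 + x2 = 1.5 gives AND is to enumerate the four binary inputs, as in this short C sketch (weights 1 and 1 with threshold 1.5 are read directly off the boundary):

#include <stdio.h>

int main(void)
{
    // perceptron: output 1 when x1 + x2 > 1.5, else 0 (weights 1, 1; threshold 1.5)
    for (int x1 = 0; x1 <= 1; x1++)
        for (int x2 = 0; x2 <= 1; x2++) {
            int out = (x1 + x2 > 1.5) ? 1 : 0;
            printf("x1=%d x2=%d -> %d\n", x1, x2, out);   // matches the AND truth table
        }
    return 0;
}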
67. Which of the following methods can be used to improve the performance if 10 different neural networks are trained on the same dataset (w/o training any
other model)? Ensemble of 10 models.
68. Which of the following is FALSE about AlexNet? The input size for AlexNet is 227x227x1.
69. Compare the overall speed of the following algorithms. (Assume that you have no knowledge of the hardware utilized for the algorithms.) Each item should be selected once. Stochastic > Full-batch > Normal equations
Normal equations – slowest
Full-batch Gradient Descent – middle
Stochastic Gradient Descent – fastest
70. Which of the following statements is FALSE about training RNNs? The cost function for the backpropagation algorithm for an MLP and the one for BPTT are exactly the same.
71. The convergence behavior of the full-batch gradient descent algorithm for a linear regression problem with one input variable is displayed as <curve>. The x-axis denotes the weight w; the y-axis denotes the cost function. The initial value for the weight is selected as w = 10, and the gradient descent steps are then shown by the filled circles while the solution converges to w = 0. Which of the following statements can be true about the learning rate? Multiple answers are possible. The learning rate decreases over iteration steps & the learning rate first increases then decreases.
72. Consider the binary classification problem using the dataset given in the figure. The squares are positive samples (Y = 1) and the circles are negative samples (Y = -1). There are n = 20 data points and the dataset is balanced. <chart>. Assume a classifier uses the following rule to make a decision: if X2 > 3, yhat = +1; otherwise yhat = -1. Which of the following statements is FALSE for this dataset and the classifier? The F1 score of the classifier is about 0.345.
73. <Diag>. Fig. 3. Architecture of our 8-layer convnet model. A 224 by 224 crop of an image (with 3 color planes) is presented as the input. This is convolved with 96 different 1st-layer filters (red), each of size 7 by 7, using a stride of 2 in both x and y. The resulting feature maps are then: i) passed through a rectified linear function (not shown), ii) pooled (max within 3x3 regions, using stride 2), and iii) contrast-normalized across feature maps to give 96 different 55 by 55 element feature maps. Similar operations are repeated in layers 2, 3, 4, 5. The last two layers are fully connected, taking features from the top convolutional layer as input in vector form (6·6·256 = 9216 dimensions). The final layer is a C-way SoftMax function, C being the number of classes. All filters and feature maps are square in shape.
A CNN architecture is given as above. What is the number of parameters needed in the first convolution layer? (7x7x3+1)x96 = 14,208
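As a quick check of the arithmetic (kernel size, channel count, and filter count are taken from the figure caption above):

#include <stdio.h>

int main(void)
{
    // first layer: 96 filters, each 7x7 over 3 input channels, plus one bias per filter
    int params = (7 * 7 * 3 + 1) * 96;
    printf("first-layer parameters = %d\n", params);   // prints 14208
    return 0;
}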
74. Put the following in the correct order for the backpropagation algorithm: forward pass, backward pass, initialize weights and biases randomly, weight update, initialize weights and biases to zero. Start with step 0.
Forward pass – Step 1
Backward pass – Step 2
initialize weights and biases to zero – Not used
initialize weights and biases randomly- Step 0
update weights – Step 3
repeat steps 1 through 3- Step 4
repeat steps 0 through 3 – Not used
repeat steps 2 through 3 – Not used
diagonal pass – Not used
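The ordering above corresponds to the usual training loop. The schematic C sketch below only illustrates that order; the four functions are hypothetical stubs, not code from the course material.

#include <stdio.h>

/* Hypothetical stubs standing in for the real network operations. */
static void initialize_weights_randomly(void) { puts("step 0: initialize weights and biases randomly"); }
static void forward_pass(void)                { puts("step 1: forward pass"); }
static void backward_pass(void)               { puts("step 2: backward pass"); }
static void update_weights(void)              { puts("step 3: update weights"); }

int main(void)
{
    initialize_weights_randomly();      /* step 0 */
    for (int e = 0; e < 3; e++) {       /* step 4: repeat steps 1 through 3 */
        forward_pass();
        backward_pass();
        update_weights();
    }
    return 0;
}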
75. For a multi-class classification problem, during the backpropagation algorithm, the local gradient at a neuron j is calculated using δj(n) = φ'j(vj(n)) · ej(n). Which of the following statements is FALSE? The neuron j belongs to the output layer.
76. Which of the following functions can be used as an activation function in the output layer if we wish to predict the probabilities of n classes (n > 2)?
SoftMax and ReLU
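For reference, a minimal C sketch of the softmax function, which turns n output-layer scores into non-negative values that sum to 1 (the example scores are made up):

#include <stdio.h>
#include <math.h>

/* Softmax over n scores: exponentiate (shifted for numerical stability) and
   normalize so the outputs form a probability distribution. */
static void softmax(const double *z, double *p, int n)
{
    double zmax = z[0], sum = 0.0;
    for (int i = 1; i < n; i++) if (z[i] > zmax) zmax = z[i];
    for (int i = 0; i < n; i++) { p[i] = exp(z[i] - zmax); sum += p[i]; }
    for (int i = 0; i < n; i++) p[i] /= sum;
}

int main(void)
{
    double z[3] = {2.0, 1.0, 0.1}, p[3];   /* hypothetical output-layer scores */
    softmax(z, p, 3);
    printf("%.3f %.3f %.3f\n", p[0], p[1], p[2]);   /* ~0.659 0.242 0.099, sums to 1 */
    return 0;
}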
77. A polynomial regression problem is solved using the Normal Equations:
    Wopt = (X^T X)^-1 X^T y   (Normal Equations)
where
    X = [ 1   1^2 ]        y = [  2 ]
        [ 1   2^2 ]            [  6 ]
        [ 1   3^2 ]            [  8 ]
        [ 1   4^2 ]            [  7 ]
        [ 1   5^2 ]            [ 11 ]
    Wopt = [ 3.5353 ]
           [ 0.2968 ]
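The result can be verified numerically. The C sketch below (an illustrative check, not part of the original solution) forms X^T X and X^T y for this design matrix and solves the 2x2 system directly:

#include <stdio.h>

int main(void)
{
    // design matrix columns: [1, x^2] for x = 1..5, and targets y
    const double x2[5] = {1, 4, 9, 16, 25};
    const double y[5]  = {2, 6, 8, 7, 11};

    // form X^T X = [a b; b c] and X^T y = [xy0; xy1]
    double a = 5, b = 0, c = 0, xy0 = 0, xy1 = 0;
    for (int i = 0; i < 5; i++) {
        b   += x2[i];
        c   += x2[i] * x2[i];
        xy0 += y[i];
        xy1 += x2[i] * y[i];
    }

    // solve the 2x2 system with the closed-form inverse
    double det = a * c - b * b;
    double w0 = ( c * xy0 - b * xy1) / det;
    double w1 = (-b * xy0 + a * xy1) / det;
    printf("w0 = %.4f, w1 = %.4f\n", w0, w1);   // expected: 3.5353, 0.2968
    return 0;
}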