Numerical Problems


● In a Convolutional Neural Network (CNN) designed for image classification, if the

input image is a 28x28 grayscale image and the first layer is a convolutional layer
with a filter size of 3x3, stride of 1, and no padding, followed by a max-pooling
layer with a 2x2 filter and stride of 2, what are the dimensions of the output after
these two layers?
○ 13x13
○ 14x14
○ 26x26
○ 28x28
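Questions like this can be checked with the standard output-size formula, out = ⌊(in + 2·padding − filter) / stride⌋ + 1, applied first to the convolution and then to the pooling layer. A minimal sketch in Python (the helper name is illustrative):

```python
def out_size(size, kernel, stride=1, padding=0):
    # Standard formula: floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

after_conv = out_size(28, 3)                    # (28 - 3) / 1 + 1 = 26
after_pool = out_size(after_conv, 2, stride=2)  # (26 - 2) / 2 + 1 = 13
print(after_conv, after_pool)  # 26 13
```

The unpadded 3x3 convolution reduces 28x28 to 26x26, and the 2x2/stride-2 pooling halves that to 13x13.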
● Consider a CNN with an input layer taking in 32x32x3 (height x width x channels)
color images. If the first convolutional layer uses 6 filters of size 5x5 with stride 1
and 'same' padding, followed by a 2x2 max-pooling layer with stride 2, what are
the dimensions of the output feature maps just after the pooling layer?
○ 14x14x6
○ 16x16x6
○ 28x28x6
○ 32x32x6
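Here 'same' padding preserves the spatial size, so only the pooling layer shrinks the maps, and the depth equals the number of filters. A quick check using the same output-size formula:

```python
def out_size(size, kernel, stride=1, padding=0):
    # floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

conv_side = 32   # 'same' padding keeps the 32x32 spatial size
depth = 6        # one output feature map per filter
pooled = out_size(conv_side, 2, stride=2)  # (32 - 2) / 2 + 1 = 16
print((pooled, pooled, depth))  # (16, 16, 6)
```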
● In a CNN architecture for detecting objects in 64x64 RGB images, if the sequence
of layers is as follows: a convolutional layer with 10 filters of size 7x7, stride 1,
and 'valid' padding, followed by a max-pooling layer with a 2x2 filter and stride 2,
what is the size of the output feature maps after these operations?
○ 29x29x10
○ 30x30x10
○ 58x58x10
○ 31x31x10
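With 'valid' padding (no padding at all), both layers shrink the maps, and the depth is again the filter count. A quick check:

```python
def out_size(size, kernel, stride=1, padding=0):
    # floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

after_conv = out_size(64, 7)                    # (64 - 7) / 1 + 1 = 58
after_pool = out_size(after_conv, 2, stride=2)  # (58 - 2) / 2 + 1 = 29
print((after_pool, after_pool, 10))  # (29, 29, 10)
```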
● In a simple artificial neural network (ANN) consisting of one input layer with three
neurons, a hidden layer with three neurons, and an output layer with one neuron,
if the input values are (2, 1, 3), all weights are 1, and all biases are zero, using a
linear activation function, what would be the output of the network?
○ 6
○ 9
○ 12
○ 18
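With a linear activation and zero biases, the network reduces to repeated weighted sums. A minimal sketch of this question's forward pass:

```python
x = [2, 1, 3]
# All input-to-hidden weights are 1 and biases are 0, so each of the
# three hidden neurons outputs 2 + 1 + 3 = 6 under a linear activation.
hidden = [sum(x) for _ in range(3)]  # [6, 6, 6]
# All hidden-to-output weights are 1 as well: 6 + 6 + 6 = 18.
output = sum(hidden)
print(output)  # 18
```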
● Consider a neural network with two input neurons, one hidden layer containing
two neurons, and an output neuron. If the inputs are (4, 5), weights from the input
to the hidden layer are [(2, 3), (1, 2)], from the hidden layer to the output neuron
are (1, 1), with all biases set to zero, what is the output with a linear activation
function applied?
○ 14
○ 16
○ 18
○ 20
● An artificial neural network is designed for a regression task with one input
neuron, two hidden neurons in a single layer, and one output neuron. If the input
value is 3, the weights from the input to the hidden neurons are 0.5 and 2, the
weights from the hidden neurons to the output are -1 and 1, and all biases are
zero, using a linear activation function, what is the output of the network?
○ 1.5
○ 3
○ 6
○ 0
● In a three-layer ANN (one hidden layer) where the input consists of two values
(0.5, 0.5), the weights to the hidden layer neurons are [(0.5, 0.5), (0.2, 0.8)], and
the weights from the hidden layer to the output neuron are (0.6, 0.4), with no
biases applied and a linear activation function, what is the output?
○ 0.5
○ 0.6
○ 0.7
○ 0.8
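Reading each weight tuple as the incoming weights of one hidden neuron (an assumption about the notation), the forward pass is two dot products:

```python
x = [0.5, 0.5]
W1 = [(0.5, 0.5), (0.2, 0.8)]  # one incoming-weight tuple per hidden neuron
w2 = (0.6, 0.4)                # hidden-to-output weights

hidden = [sum(w * xi for w, xi in zip(row, x)) for row in W1]  # [0.5, 0.5]
output = sum(w * h for w, h in zip(w2, hidden))
print(round(output, 10))  # 0.5
```

Both hidden neurons happen to output 0.5, and since the output weights sum to 1, the result is again 0.5.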
● For a neural network with an input layer consisting of two neurons, a hidden layer
of three neurons, and an output layer with one neuron, where the input values are
(-1, 1), weights from the input to the hidden layer are [(1, -1), (-1, 1), (1, 1)], from
the hidden layer to the output neuron are [1, 2, 3], and all biases are 0, using a
linear activation function, what is the output?
○ 4
○ 6
○ 8
○ 10
● In an ANN with two input neurons, a hidden layer of four neurons, and one output
neuron, if the inputs are (1, 2), the weights from the inputs to the hidden neurons
are [(0.5, 0.5), (1, 1), (1.5, 1.5), (2, 2)], and from the hidden neurons to the output
neuron are [1, 2, 3, 4], with all biases set to zero and a linear activation function,
what is the output of this network?
○ 20
○ 30
○ 40
○ 50
● In a neural network with one input layer containing two neurons, one hidden layer
with three neurons, and one output neuron, if the input values are (3, -3), and all
weights to the hidden layer are set to 1, weights from the hidden layer to the
output neuron are set to (-1, 2, -1), and all biases are zero, using a linear activation
function, what is the output of the network?
○ 0
○ 3
○ -3
○ 6
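Because 3 and -3 cancel under the all-ones input weights, every hidden activation is zero, so the output is zero regardless of the hidden-to-output weights. A quick check:

```python
x = [3, -3]
# Every input-to-hidden weight is 1, so each hidden neuron sees 3 + (-3) = 0.
hidden = [sum(x) for _ in range(3)]  # [0, 0, 0]
w2 = (-1, 2, -1)
output = sum(w * h for w, h in zip(w2, hidden))
print(output)  # 0
```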
● Consider a simple ANN consisting of an input layer with four neurons, a hidden
layer with two neurons, and an output layer with one neuron. If the inputs are (1,
0, -1, 2), weights from the input to the hidden layer are [(0.5, -0.5), (1, -1), (-1.5,
1.5), (2, -2)], and from the hidden layer to the output neuron are (2, -1), with all
biases set to one, what is the output using a linear activation function?
○ 1
○ 0
○ 3
○ -2
● An ANN is designed with two input neurons, one hidden layer consisting of two
neurons, and one output neuron. The input values are (2, 1), the weights from the
inputs to the hidden layer are [(0.6, 0.4), (0.4, 0.6)], and the weights from the
hidden layer to the output neuron are [1, -1], with no biases applied. Using a linear
activation function, what is the network's output?
○ 0.2
○ 1.2
○ 2.0
○ 0.8
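Again reading each weight tuple as one hidden neuron's incoming weights (an assumption about the notation), the forward pass can be sketched as:

```python
x = [2, 1]
W1 = [(0.6, 0.4), (0.4, 0.6)]  # one incoming-weight tuple per hidden neuron
w2 = (1, -1)                   # hidden-to-output weights

hidden = [sum(w * xi for w, xi in zip(row, x)) for row in W1]  # [1.6, 1.4]
output = sum(w * h for w, h in zip(w2, hidden))
print(round(output, 10))  # 0.2
```

The hidden activations are 1.6 and 1.4, and the (1, -1) output weights take their difference: 0.2.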
● In a neural network with three input neurons, a single hidden layer containing four
neurons, and an output layer with one neuron, given the inputs are (-2, 3, 1), the
weights from the input to the hidden layer are [(1, -1, 2), (-2, 1, -1), (3, -2, 3), (1, 2,
-1)], and from the hidden layer to the output neuron are [2, -1, 1, -2], with all biases
set to zero, what is the output using a linear activation function?
○ 4
○ -4
○ 8
○ -8
● For an ANN with two input neurons, one hidden layer containing three neurons,
and one output neuron, where the input values are (0.5, -0.5), weights from the
input to the hidden layer are [(1, -1), (1, 1), (-1, 1)], and from the hidden layer to the
output neuron are [1, -1, 2], with biases in the hidden layer set to (1, -1, 0) and no
bias in the output layer, using a linear activation function, what is the output?
○ 0.5
○ -0.5
○ 1.5
○ -1.5
● In an ANN configuration where there are three input neurons, a hidden layer with
three neurons, and one output neuron, if the inputs are (2, -1, 0), and the weights
to the hidden layer neurons are [(0.5, 0.5, 0.5), (1, 1, 1), (1.5, 1.5, 1.5)], and weights
from the hidden layer to the output neuron are (1, 2, 3), all with zero biases, what
is the output using a linear activation function?
○ 1.5
○ 3
○ 4.5
○ 6
