
Set-1 Key

Set-1 Aptitude

1. Answer: a) 162
2. Answer: b) 30 seconds
3. Answer: a) 25%
4. Answer: a) 24
5. Answer: a) 60 km/h
6. Answer: a) 8 cm
7. Answer: b) 1.5 km/h
8. Answer: b) ₹800
9. Answer: a) 4 days
10. Answer: c) 30%
11. Answer: b) 25%
12. Answer: b) 20
13. Answer: a) ₹3,000 and ₹2,000
14. Answer: a) 28
15. Answer: b) 23
16. Answer: a) 15, 21
17. Answer: c) 8 km/h
18. Answer: b) 51
19. Answer: c) 20% profit
20. Answer: c) 500 m

Set-1 Reasoning

1. Answer: c) 28
2. Answer: a) 42
3. Answer: a) 5217
4. Answer: c) 11
5. Answer: d) Sphere
6. Answer: b) 54
7. Answer: b) 1, 4, 3, 2
8. Answer: b) 64
9. Answer: c) 112
10. Answer: d) TERMINAL
11. Answer: b) Nephew
12. Answer: c) Son
13. Answer: a) IJ
14. Answer: c) 39
15. Answer: a) 82.5°
16. Answer: b) 64
17. Answer: a) 64
18. Answer: b) 432
19. Answer: c) M
20. Answer: b) Mother

Set-1 Probability

1. Answer: a) 1/2
2. Answer: a) 1/4
3. Answer: a) 7/15
4. Answer: b) 4/36
5. Answer: a) 1/221
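
The original questions are not reproduced in this key, so the sketch below only illustrates the favourable-over-total counting that answers like these rely on. Reading answer 5's 1/221 as the classic two-aces-without-replacement draw is an assumption, not something stated in the key.

    from fractions import Fraction

    # General rule behind answers of this kind:
    # P(event) = favourable outcomes / total outcomes.
    coin_head = Fraction(1, 2)              # one favourable face out of two

    # 1/221 matches drawing two aces from a standard 52-card deck without
    # replacement (an assumed reading of question 5, which is not shown here).
    two_aces = Fraction(4, 52) * Fraction(3, 51)
    print(two_aces)                         # Fraction(1, 221)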

Set-1 Statistical Measures

1. Answer: a) 9
2. Answer: a) 9
3. Answer: c) 7
4. Answer: a) 10
5. Answer: b) 2.6
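
The underlying data sets are not shown in this key, so the following is only a minimal sketch of how the usual measures are computed with Python's statistics module, using illustrative numbers rather than the data from the original questions.

    import statistics

    # Illustrative data only, not the data set behind the answers above.
    data = [2, 4, 4, 4, 5, 5, 7, 9]

    print(statistics.mean(data))    # arithmetic mean: 5
    print(statistics.median(data))  # middle value of the sorted data: 4.5
    print(statistics.mode(data))    # most frequent value: 4
    print(statistics.pstdev(data))  # population standard deviation: 2.0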

Set-1 Final

Python

1. Answer: b) [1, 2, 3, 4]
2. Answer: b) Filters items based on a condition.
3. Answer: c) define
4. Answer: b) Using lambda keyword
5. Answer: b) Combines elements from multiple iterables into tuples.
6. Answer: a) 1 3
7. Answer: a) True
8. Answer: b) False
9. Answer: b) my_dict['key']
10. Answer: c) reversed object
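
The question texts are not included in this key, so the snippet below simply demonstrates the Python behaviours several of the answers point to (filter() with a condition, lambda, zip() producing tuples, dictionary access with my_dict['key'], and reversed() returning a reversed object). Treat it as an illustrative sketch rather than the original exercises.

    # filter() keeps the items that satisfy a condition (answer 2),
    # here written with a lambda (answer 4).
    evens = list(filter(lambda x: x % 2 == 0, [1, 2, 3, 4]))   # [2, 4]

    # zip() combines elements from multiple iterables into tuples (answer 5).
    pairs = list(zip([1, 2, 3], ['a', 'b', 'c']))   # [(1, 'a'), (2, 'b'), (3, 'c')]

    # A dictionary value is read with my_dict['key'] (answer 9).
    my_dict = {'key': 'value'}
    print(my_dict['key'])                           # value

    # reversed() returns a reversed object, not a list (answer 10).
    print(reversed([1, 2, 3]))                      # <list_reverseiterator object ...>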

Machine Learning

1. Answer: a) When the model performs well on training data but poorly on test data.
2. Answer: c) K-Means Clustering
3. Answer: b) Confusion Matrix
4. Answer: b) Sigmoid
5. Answer: d) All of the above.
6. Answer: d) Both b and c.
7. Answer: b) CNN
8. Answer: b) Learning rate
9. Answer: a) Updating weights in neural networks.
10. Answer: d) AdaBoost
11. Answer: b) Classification performance
12. Answer: b) Random Forest
13. Answer: b) Representing categorical variables
14. Answer: c) K-Nearest Neighbors
15. Answer: a) It controls the size of steps in gradient descent
16. Answer: b) Predict outcomes based on labeled data.
17. Answer: c) Clustering
18. Answer: a) Gradient Descent
19. Answer: b) Pandas
20. Answer: c) True/False predictions and errors
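
As a hedged illustration of two of the concepts named above (a Random Forest classifier and the confusion matrix used to summarise true/false predictions and errors), here is a minimal scikit-learn sketch on synthetic data; it is not tied to the original questions.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import train_test_split

    # Synthetic, illustrative data only.
    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Random Forest (answer 12) fitted on the training split.
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    # Confusion matrix (answers 3 and 20): counts of correct and incorrect predictions.
    print(confusion_matrix(y_test, clf.predict(X_test)))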

Set-2

Aptitude
1. Answer: b) 12
2. Answer: a) 30 meters
3. Answer: b) 50 km/h
4. Answer: b) ₹18,000
5. Answer: b) 5/14
6. Answer: c) 15 cm
7. Answer: b) 12 days
8. Answer: a) 18.75
9. Answer: c) ₹2,000
10. Answer: b) 8 km/h
11. Answer: b) 25%
12. Answer: a) 20 and 30
13. Answer: b) 24 km/h
14. Answer: a) ₹2,712
15. Answer: b) 90
16. Answer: a) 240 cm
17. Answer: b) ₹5,720
18. Answer: a) ₹2,000
19. Answer: b) 45 km/h
20. Answer: b) 30 years

Reasoning

1. Answer: b) 37
2. Answer: d) 256
3. Answer: c) 37
4. Answer: d) 21
5. Answer: d) 30
6. Answer: a) 81
7. Answer: d) ECAN
8. Answer: c) 28
9. Answer: b) J
10. Answer: b) 16, 4, 8, 18, 5
11. Answer: a) Tuesday
12. Answer: c) Truthful
13. Answer: b) 24
14. Answer: a) 128
15. Answer: b) 72
16. Answer: b) 37
17. Answer: a) E
18. Answer: a) 2
19. Answer: a) 19, 15, 12, 22, 5
20. Answer: a) GBUIFS

Probability

1. Answer: a) 8/20
2. Answer: a) 1/6
3. Answer: a) 1/4
4. Answer: a) 1/13
5. Answer: b) 5/20

Statistical Measures

1. Answer: a) 12
2. Answer: a) 5
3. Answer: c) 9
4. Answer: a) 8
5. Answer: d) 4

Set-2 Final

Python

1. Answer: a) [1, 2, 3]
2. Answer: c) pop()
3. Answer: b) {1, 2, 3, 4}
4. Answer: c) Tuple
5. Answer: a) 3
6. Answer: b) Memory addresses of two objects
7. Answer: b) nohtyP
8. Answer: b) Combine multiple positional arguments into a tuple
9. Answer: a) [1, 2, 3]
10. Answer: c) len()
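
As with Set-1, the questions are not reproduced here, so the snippet below only demonstrates the behaviours the answers refer to (list.pop(), set de-duplication, string reversal giving "nohtyP", *args collecting into a tuple, and len()); it is an illustrative sketch, not the original exercises.

    # list.pop() removes and returns an element (answer 2).
    items = [1, 2, 3]
    items.pop()                      # returns 3; items is now [1, 2]

    # A set keeps only unique elements (answer 3).
    print(set([1, 2, 2, 3, 4]))      # {1, 2, 3, 4}

    # Slicing with a step of -1 reverses a string (answer 7).
    print("Python"[::-1])            # nohtyP

    # *args collects extra positional arguments into a tuple (answer 8).
    def collect(*args):
        return args

    print(collect(1, 2, 3))          # (1, 2, 3)

    # len() returns the number of items in a container (answer 10).
    print(len([1, 2, 3]))            # 3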

Machine Learning

1. Answer: b) Normalize data for better algorithm performance
2. Answer: b) PCA
3. Answer: b) SoftMax
4. Answer: a) Analyze training vs. testing performance.
5. Answer: b) Random Forest
6. Answer: a) A form of regularization to prevent overfitting.
7. Answer: b) Number of complete passes over the training dataset.
8. Answer: b) To evaluate model performance on unseen data.
9. Answer: b) Isolation Forest
10. Answer: a) Minimize the error in predictions.
11. Answer: b) Linear Regression
12. Answer: a) R-Squared
13. Answer: b) K-Nearest Neighbors
14. Answer: c) To reduce overfitting by penalizing large coefficients.
15. Answer: c) K-Nearest Neighbors
16. Answer: c) Bagging builds models in parallel, while boosting builds models sequentially.
17. Answer: c) Support Vector Machines
18. Answer: b) It recursively splits the data into subsets based on feature values.
19. Answer: a) Better performance due to averaging multiple decision trees.
20. Answer: d) Model weights
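
To ground a few of the concepts above (feature scaling, PCA, and a regularization penalty on large coefficients), here is a minimal scikit-learn sketch on synthetic data; Ridge regression is used as one common example of such a penalty, which is an assumption rather than something stated in the key.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import Ridge
    from sklearn.preprocessing import StandardScaler

    # Synthetic, illustrative data only.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 6))
    y = X @ np.array([1.0, 2.0, 0.0, 0.0, 0.5, -1.0]) + rng.normal(scale=0.1, size=100)

    # Normalize features so they are on comparable scales (answer 1).
    X_scaled = StandardScaler().fit_transform(X)

    # PCA reduces the data to a smaller number of components (answer 2).
    X_reduced = PCA(n_components=2).fit_transform(X_scaled)

    # Ridge adds an L2 penalty that shrinks large coefficients (answer 14).
    ridge = Ridge(alpha=1.0).fit(X_scaled, y)
    print(ridge.coef_)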
