1) Which of the following is/are true about bagging trees?
1. In bagging trees, the individual trees are independent of each other
2. Bagging is a method of improving performance by aggregating the results of weak learners
A) 1
B) 2
C) 1 and 2
D) None of these
Solution: C
Both options are true. In bagging, the individual trees are independent of each other because each one is built on a different bootstrap sample of the data.
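As a side note, this independence is what lets implementations train bagged trees in parallel. A minimal sketch, assuming scikit-learn is available (the first argument was named base_estimator before scikit-learn 1.2):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, random_state=0)
    bag = BaggingClassifier(
        DecisionTreeClassifier(),  # base learner
        n_estimators=50,
        bootstrap=True,            # each tree gets its own bootstrap sample
        n_jobs=-1,                 # trees are independent, so fit in parallel
        random_state=0,
    ).fit(X, y)
    print(bag.score(X, y))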
2) Which of the following is/are true about boosting trees?
1. In boosting trees, the individual weak learners are independent of each other
2. Boosting is a method of improving performance by aggregating the results of weak learners
A) 1
B) 2
C) 1 and 2
D) None of these
Solution: B
In boosting trees, the individual weak learners are not independent of each other, because each tree corrects the results of the previous trees. Both bagging and boosting can be considered methods for improving the base classification algorithms.
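A hedged sketch of that dependence: in an AdaBoost-style loop, tree t+1 is fit with sample weights that depend on tree t's mistakes (the reweighting rule below is deliberately simplified, not the exact AdaBoost update):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=200, random_state=1)
    w = np.full(len(y), 1.0 / len(y))  # start with uniform sample weights
    for t in range(3):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        wrong = stump.predict(X) != y
        w[wrong] *= 2.0                # upweight this stump's mistakes ...
        w /= w.sum()                   # ... so the next stump depends on it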
3) Which of the following is/are true about Random Forest and Gradient Boosting ensemble methods?
1. Both methods can be used for classification
2. Random Forest is used for classification whereas Gradient Boosting is used for regression
3. Random Forest is used for regression whereas Gradient Boosting is used for classification
4. Both methods can be used for regression
A) 1
B) 2
C) 3
D) 4
E) 1 and 4
Solution: E
Both Random Forest and Gradient Boosting are designed for classification as well as regression tasks.
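A quick illustration, assuming scikit-learn: both ensembles ship in classifier and regressor variants, so each handles both task types.

    from sklearn.ensemble import (
        GradientBoostingClassifier, GradientBoostingRegressor,
        RandomForestClassifier, RandomForestRegressor,
    )

    # classifier and regressor variants exist for both methods
    for est in (RandomForestClassifier, RandomForestRegressor,
                GradientBoostingClassifier, GradientBoostingRegressor):
        print(est.__name__)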
4) In Random Forest you generate hundreds of trees (say T1, T2, …, Tn) and then aggregate the results of these trees. Which of the following is true about an individual (Tk) tree in Random Forest?
1. An individual tree is built on a subset of the features
2. An individual tree is built on all of the features
3. An individual tree is built on a subset of the observations
4. An individual tree is built on the full set of observations
A) 1 and 3
B) 1 and 4
C) 2 and 3
D) 2 and 4
Solution: A
Random Forest is based on the bagging concept: it considers a fraction of the samples and a fraction of the features for building each individual tree.
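In scikit-learn terms the two fractions map onto max_features and max_samples (a sketch; max_samples requires scikit-learn >= 0.22 and bootstrap=True):

    from sklearn.ensemble import RandomForestClassifier

    rf = RandomForestClassifier(
        n_estimators=100,
        max_features=0.5,  # each split considers a fraction of the features
        bootstrap=True,
        max_samples=0.7,   # each tree sees a fraction of the observations
        random_state=0,
    )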
5) Which of the following is true about the hyperparameter "max_depth" in Gradient Boosting?
1. Lower is better if the validation accuracies are equal
2. Higher is better if the validation accuracies are equal
3. Increasing the value of max_depth may overfit the data
4. Increasing the value of max_depth may underfit the data
A) 1 and 3
B) 1 and 4
C) 2 and 3
D) 2 and 4
Solution: A
Increasing the depth beyond a certain value may overfit the data, and when two depth values give the same validation accuracy we always prefer the smaller depth in the final model.
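A minimal tuning sketch (scikit-learn assumed, synthetic data): when two depths score the same on validation, take the smaller one.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=400, random_state=0)
    for depth in (1, 2, 4, 8):
        score = cross_val_score(
            GradientBoostingClassifier(max_depth=depth, random_state=0),
            X, y, cv=5).mean()
        print(depth, round(score, 3))  # at a tie, prefer the smaller depth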
6) Which of the following algorithms doesn't use learning rate as one of its hyperparameters?
1. Gradient Boosting
2. Extra Trees
3. AdaBoost
4. Random Forest
A) 1 and 3
B) 1 and 4
C) 2 and 3
D) 2 and 4
Solution: D
Random Forest and Extra Trees don’t have learning rate as a hyperparameter.
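You can verify this directly, assuming scikit-learn:

    from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                                  GradientBoostingClassifier,
                                  RandomForestClassifier)

    for est in (GradientBoostingClassifier, ExtraTreesClassifier,
                AdaBoostClassifier, RandomForestClassifier):
        # learning_rate appears only in the boosting estimators
        print(est.__name__, 'learning_rate' in est().get_params())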
7) Suppose you are given the following graph, which shows the ROC curves for two different classification algorithms. Which algorithm would you take into consideration in your final model building?
A) Random Forest
B) Logistic Regression
C) Both of the above
D) None of these
Solution: A
Since Random Forest has the largest AUC in the picture, I would prefer Random Forest.
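A sketch of the same comparison in code (scikit-learn assumed, synthetic data standing in for the pictured dataset):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=600, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    for model in (RandomForestClassifier(random_state=0),
                  LogisticRegression(max_iter=1000)):
        scores = model.fit(Xtr, ytr).predict_proba(Xte)[:, 1]
        # the model with the larger AUC is the one to keep
        print(type(model).__name__, round(roc_auc_score(yte, scores), 3))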
Suppose you want to apply the AdaBoost algorithm on data D which has T observations. You set half the data for training and half for testing initially. Now you want to increase the number of training data points: T1, T2, …, Tn where T1 < T2 < … < Tn-1 < Tn.
8) Which of the following is true about training and testing error in such a case?
A) The difference between training error and test error increases as number of observations
increases
B) The difference between training error and test error decreases as number of observations
increases
C) The difference between training error and test error will not change
D) None of these
Solution: B
As we get more and more data, the training error increases and the testing error decreases, and the two converge towards the same error.
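This is exactly what a learning curve shows; a sketch assuming scikit-learn and synthetic data:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import learning_curve

    X, y = make_classification(n_samples=1000, random_state=0)
    sizes, train_scores, test_scores = learning_curve(
        AdaBoostClassifier(random_state=0), X, y, cv=5,
        train_sizes=np.linspace(0.1, 1.0, 5))
    # the train/test gap narrows as the training size grows
    gap = train_scores.mean(axis=1) - test_scores.mean(axis=1)
    print(dict(zip(sizes.tolist(), gap.round(3).tolist())))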
9) In Random Forest and Gradient Boosting algorithms, features can be of any type: for example, continuous or categorical. Which of the following is true about the handling of real-valued features?
A) Only the Random Forest algorithm handles real valued attributes by discretizing them
B) Only the Gradient Boosting algorithm handles real valued attributes by discretizing them
C) Both algorithms can handle real valued attributes by discretizing them
D) None of these
Solution: C
10) Which of the following algorithms is not an example of an ensemble learning algorithm?
A) Random Forest
B) Adaboost
C) Extra Trees
D) Gradient Boosting
E) Decision Trees
Solution: E
A decision tree doesn't aggregate the results of multiple trees, so it is not an ensemble algorithm.
11) Suppose you are using a bagging-based algorithm, say Random Forest, in model building. Which of the following can be true?
1. The number of trees should be as large as possible
2. You will have interpretability after using Random Forest
A) 1
B) 2
C) 1 and 2
D) None of these
Solution: A
Since Random Forest aggregates the results of different weak learners, we would want a larger number of trees in model building if possible. However, Random Forest is a black-box model, so you will lose interpretability after using it.
Context 12-15
Consider the following figure for answering the next few questions. In the figure, X1 and X2 are the two features and the data points are represented by dots (-1 is the negative class and +1 is the positive class). You first split the data based on feature X1 (say the splitting point is x11), which is shown in the figure using a vertical line. Every value less than x11 will be predicted as the positive class and every value greater than x11 will be predicted as the negative class.
12) How many data points will be misclassified by this split?
A) 1
B) 2
C) 3
D) 4
Solution: A
Only one observation is misclassified: one negative-class point lies on the left side of the vertical line, so it will be predicted as a positive class.
13) Which of the following splitting points on feature X1 will classify the data correctly?
A) Greater than x11
B) Less than x11
C) Equal to x11
D) None of the above
Solution: D
If you search along X1, you won't find any splitting point that gives 100% accuracy.
14) If you consider only feature X2 for splitting, can you now perfectly separate the positive class from the negative class with any one split on X2?
A) Yes
B) No
Solution: B
15) Now consider one split on each feature (one on X1 and one on X2). You can split either feature at any point. Would you be able to classify all data points correctly?
A) TRUE
B) FALSE
Solution: B
You won't find such a case, because you will always get at least one misclassification.
Context 16-17
Suppose you are working on a binary classification problem with 3 input features, and you choose to apply a bagging algorithm (X) on this data. You choose max_features = 2 and n_estimators = 3. Now assume that each estimator has 70% accuracy.
Note: Algorithm X aggregates the results of the individual estimators based on maximum voting.
16) What will be the maximum accuracy you can get?
A) 70%
B) 80%
C) 90%
D) 100%
Solution: D
With maximum voting you can arrange the three estimators' mistakes on disjoint observations, so that every data point still gets at least two correct votes, giving 100% accuracy.
17) What will be the minimum accuracy you can get?
A) Always greater than 70%
B) Always greater than and equal to 70%
C) It can be less than 70%
D) None of these
Solution: C
In the worst-case arrangement of the individual errors, the majority vote can fall below 70%.
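Both answers can be checked by brute force. The sketch below (pure Python, assuming 10 data points so that 70% accuracy means exactly 7 correct answers per estimator) enumerates every arrangement of each estimator's correct answers and reports the best and worst majority-vote accuracy:

    from itertools import combinations, product

    n_points, n_correct = 10, 7  # 70% accuracy on 10 points
    patterns = [set(c) for c in combinations(range(n_points), n_correct)]

    # relabelling the data points lets us fix the first estimator's pattern
    m1 = patterns[0]
    best, worst = 0.0, 1.0
    for m2, m3 in product(patterns, repeat=2):
        hits = sum((i in m1) + (i in m2) + (i in m3) >= 2
                   for i in range(n_points))
        acc = hits / n_points
        best, worst = max(best, acc), min(worst, acc)
    print(best, worst)  # 1.0 (Q16) and 0.6, which is below 70% (Q17)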
18) When building a decision tree, we split at the attribute that has the highest information gain. In the image below, select the attribute which has the highest information gain.
A) Outlook
B) Humidity
C) Windy
D) Temperature
Solution: A
Information gain increases with the average purity of subsets. So option A would be the right
answer.
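A hedged sketch of the computation, using the class counts of the classic play-tennis data that this image is usually drawn from (an assumption here): Outlook splits 9 yes / 5 no into subsets of (2/3, 4/0, 3/2).

    from collections import Counter
    from math import log2

    def entropy(labels):
        n = len(labels)
        return -sum(c / n * log2(c / n) for c in Counter(labels).values())

    def information_gain(parent, children):
        n = len(parent)
        return entropy(parent) - sum(len(c) / n * entropy(c) for c in children)

    parent = ['y'] * 9 + ['n'] * 5                      # 9 yes, 5 no
    outlook = [['y'] * 2 + ['n'] * 3,                   # sunny
               ['y'] * 4,                               # overcast (pure)
               ['y'] * 3 + ['n'] * 2]                   # rainy
    print(round(information_gain(parent, outlook), 3))  # ~0.247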
19) Which of the following is true about the Gradient Boosting trees?
1. In each stage, a new regression tree is introduced to compensate for the shortcomings of the existing model
2. We can use the gradient descent method to minimize the loss function
A) 1
B) 2
C) 1 and 2
D) None of these
Solution: C
Both are true: each stage adds a tree that corrects the current model's errors, and fitting that tree amounts to a gradient-descent step on the loss function.
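A minimal sketch of both points at once (scikit-learn trees, squared-error loss assumed): each stage fits a new regression tree to the negative gradient of the loss, which for squared error is simply the residual.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

    pred, lr = np.zeros_like(y), 0.1
    for _ in range(100):
        residual = y - pred                 # negative gradient of squared loss
        tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
        pred += lr * tree.predict(X)        # one gradient-descent step
    print(round(float(np.mean((y - pred) ** 2)), 4))  # training loss shrinks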
20) True-False: The bagging is suitable for high variance low bias models?
A) TRUE
B) FALSE
Solution: A
Bagging is suitable for high-variance, low-bias models, or in other words for complex models.
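A small sketch of the claim (scikit-learn assumed, noisy synthetic data): bag an unpruned, high-variance tree and compare cross-validated accuracy with the single tree.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, flip_y=0.2, random_state=0)
    tree = DecisionTreeClassifier(random_state=0)     # high variance, low bias
    bag = BaggingClassifier(tree, n_estimators=50, random_state=0)
    for name, model in (('single tree', tree), ('bagged trees', bag)):
        print(name, round(cross_val_score(model, X, y, cv=5).mean(), 3))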
21) Which of the following is true when you choose the fraction of observations used to build each base learner in a tree-based algorithm?
A) Decreasing the fraction of samples used to build the base learners will result in a decrease in variance
B) Decreasing the fraction of samples used to build the base learners will result in an increase in variance
C) Increasing the fraction of samples used to build the base learners will result in a decrease in variance
D) Increasing the fraction of samples used to build the base learners will result in an increase in variance
Solution: A
Sub-sampling decorrelates the base learners, so decreasing the fraction of samples used per learner reduces the variance of the aggregated model.
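In scikit-learn's gradient boosting this fraction is the subsample hyperparameter (a sketch; values below 1.0 give stochastic gradient boosting):

    from sklearn.ensemble import GradientBoostingClassifier

    # each base learner is fit on 70% of the observations,
    # which tends to reduce the variance of the ensemble
    gbm = GradientBoostingClassifier(subsample=0.7, random_state=0)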
Context 22-23
Suppose you are building a Gradient Boosting model on data which has millions of observations and thousands of features. Before building the model, you want to consider how different hyperparameter settings affect the training time.
22) Consider the hyperparameter "number of trees" and arrange the following options in terms of the time taken to build the Gradient Boosting model:
1. Number of trees = 100
2. Number of trees = 500
3. Number of trees = 1000
A) 1~2~3
B) 1<2<3
C) 1>2>3
D) None of these
Solution: B
The time taken to build 1000 trees is maximum and the time taken to build 100 trees is minimum.
23) Now consider the learning rate hyperparameter and arrange the following options in terms of the time taken to build the Gradient Boosting model:
1. learning rate = 1
2. learning rate = 2
3. learning rate = 3
A) 1~2~3
B) 1<2<3
C) 1>2>3
D) None of these
Solution: A
Since the learning rate doesn't affect training time, all learning rates would take an equal amount of time.
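A rough timing sketch covering both questions (scikit-learn assumed; the absolute numbers are hardware-dependent, but the pattern is: fit time grows with the number of trees and stays flat across learning rates):

    from time import perf_counter
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=2000, random_state=0)
    for n in (100, 500, 1000):                    # Q22: time grows with trees
        t0 = perf_counter()
        GradientBoostingClassifier(n_estimators=n, random_state=0).fit(X, y)
        print('n_estimators =', n, round(perf_counter() - t0, 2), 's')
    for lr in (1, 2, 3):                          # Q23: time roughly constant
        t0 = perf_counter()
        GradientBoostingClassifier(learning_rate=lr, random_state=0).fit(X, y)
        print('learning_rate =', lr, round(perf_counter() - t0, 2), 's')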
24) In gradient boosting, it is important to use the learning rate to get the optimum output. Which of the following is true about choosing the learning rate?
A) The learning rate should be as high as possible
B) The learning rate should be as low as possible
C) The learning rate should be low, but not very low
D) The learning rate should be high, but not very high
Solution: C
The learning rate should be low, but not very low; otherwise the algorithm will take very long to finish training, because you will need to increase the number of trees.
25) [True or False] Cross validation can be used to select the number of iterations in boosting?
A) TRUE
B) FALSE
Solution: A
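A sketch of doing exactly that with scikit-learn, treating the number of boosting iterations (n_estimators) as a hyperparameter searched by cross-validation:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import GridSearchCV

    X, y = make_classification(n_samples=500, random_state=0)
    search = GridSearchCV(
        GradientBoostingClassifier(random_state=0),
        param_grid={'n_estimators': [50, 100, 200, 400]},
        cv=5,
    ).fit(X, y)
    print(search.best_params_)  # CV-selected number of iterations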
26) When you use a boosting algorithm, you always consider weak learners. Which of the following is the main reason for using weak learners?
1. To prevent overfitting
2. To prevent underfitting
A) 1
B) 2
C) 1 and 2
D) None of these
Solution: A
To prevent overfitting: since the complexity of the overall learner increases at each step, starting with weak learners implies the final classifier will be less likely to overfit.
27) To apply bagging to regression trees, which of the following is/are true?
1. We build N regression trees on N bootstrap samples
2. We take the average of the N regression trees
3. Each tree has high variance and low bias
A) 1 and 2
B) 2 and 3
C) 1 and 3
D) 1,2 and 3
Solution: D
28) How do you select the best hyperparameters in a tree-based model?
A) Measure performance over the training data
B) Measure performance over the validation data
C) Both of these
D) None of these
Solution: B
We always select hyperparameters based on the validation results, and keep the test results for the final comparison.
29) In which of the following scenarios is gain ratio preferred over information gain?
A) When a categorical variable has a very large number of categories
B) When a categorical variable has a very small number of categories
C) The number of categories is not the reason
D) None of these
Solution: A
For high-cardinality attributes, gain ratio is preferred over the information gain technique.
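A sketch of the normalisation (pure Python): gain ratio divides information gain by the split information, which grows with the number of categories and so penalises high-cardinality attributes.

    from collections import Counter
    from math import log2

    def entropy(labels):
        n = len(labels)
        return -sum(c / n * log2(c / n) for c in Counter(labels).values())

    def gain_ratio(parent, children):
        n = len(parent)
        gain = entropy(parent) - sum(len(c) / n * entropy(c) for c in children)
        split_info = -sum(len(c) / n * log2(len(c) / n) for c in children)
        return gain / split_info if split_info else 0.0

    # an ID-like split: every child is pure, but the split information is huge,
    # so the gain ratio stays far below the raw information gain of 1.0
    parent = ['y'] * 7 + ['n'] * 7
    id_split = [[label] for label in parent]  # 14 single-row categories
    print(round(gain_ratio(parent, id_split), 3))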
30) Suppose you are given the following scenarios for training and validation error for Gradient Boosting. Which of the following hyperparameter settings would you choose in such a case?
A) 1
B) 2
C) 3
D) 4
Solution: B
Scenarios 2 and 4 have the same validation accuracy, but we would select scenario 2 because its depth is lower.