
Commit 8c2ff61

Add 51-55 questions
1 parent 44b8f96 commit 8c2ff61

1 file changed: +15 -2 lines changed

README.md

Lines changed: 15 additions & 2 deletions
@@ -1,5 +1,5 @@
-# Computer Vision Interview Questions
-A collection of technical interview questions for computer vision engineering positions.
+# Machine Learning Interview Questions
+A collection of technical interview questions for machine learning and computer vision engineering positions.

#### 1) What's the trade-off between bias and variance? [[src](http://houseofbots.com/news-detail/2849-4-data-science-and-machine-learning-interview-questions)]

@@ -217,6 +217,19 @@ Auto encoder is basically used to learn a compressed form of given data. Few app
- Generator
- Discriminator

+#### 52) What's the difference between boosting and bagging?
+Boosting and bagging are similar in that both are ensemble techniques, where a number of weak learners (classifiers/regressors that are only slightly better than random guessing) are combined (through averaging or majority vote) to create a strong learner that can make accurate predictions. Bagging means that you take bootstrap samples (with replacement) of your data set, and each sample trains a (potentially) weak learner. Boosting, on the other hand, uses all the data to train each learner, but instances that were misclassified by the previous learners are given more weight so that subsequent learners focus on them during training. [[src]](https://www.quora.com/Whats-the-difference-between-boosting-and-bagging)
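A minimal, illustrative comparison of the two approaches, assuming scikit-learn is available; the data set and parameters below are arbitrary. `BaggingClassifier` trains each base estimator on a bootstrap sample, while `AdaBoostClassifier` reweights misclassified instances between rounds:

```python
# Illustrative sketch: bagging vs. boosting on a toy classification problem.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

# Toy data set; sizes and parameters are arbitrary.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Bagging: each base tree sees a different bootstrap sample (drawn with
# replacement), and the ensemble combines their predictions by voting.
bagging = BaggingClassifier(n_estimators=50, random_state=0)

# Boosting: weak learners are trained sequentially on all of the data, with
# misclassified instances reweighted so later learners focus on them.
boosting = AdaBoostClassifier(n_estimators=50, random_state=0)

for name, model in [("bagging", bagging), ("boosting", boosting)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```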
+
+#### 53) Explain how a ROC curve works. [[src]](https://www.springboard.com/blog/machine-learning-interview-questions/)
+The ROC curve is a graphical representation of the trade-off between the true positive rate and the false positive rate at various classification thresholds. It’s often used as a proxy for the trade-off between the sensitivity of the model (true positives) and the fall-out, or the probability that it will trigger a false alarm (false positives).
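A small sketch of how the curve's points are obtained, assuming scikit-learn and made-up labels and scores: each threshold on the predicted score yields one (FPR, TPR) pair, and sweeping the threshold traces out the curve.

```python
# Illustrative sketch: ROC points are (FPR, TPR) pairs, one per threshold.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Made-up ground-truth labels and classifier scores.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.70, 0.55, 0.90])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
for f, t, thr in zip(fpr, tpr, thresholds):
    print(f"threshold >= {thr:.2f}: FPR = {f:.2f}, TPR = {t:.2f}")

# The area under the ROC curve summarises the trade-off in one number.
print("AUC =", roc_auc_score(y_true, y_score))
```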
+
+#### 54) What’s the difference between Type I and Type II error? [[src]](https://www.springboard.com/blog/machine-learning-interview-questions/)
+Type I error is a false positive, while Type II error is a false negative. Briefly stated, a Type I error means claiming something has happened when it hasn’t, while a Type II error means claiming that nothing is happening when in fact something is.
+A memorable way to think about this: a Type I error is telling a man he is pregnant, while a Type II error is telling a pregnant woman she isn’t carrying a baby.
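As a small illustrative sketch, assuming scikit-learn and made-up labels, both error types can be read directly off a binary confusion matrix: false positives are Type I errors and false negatives are Type II errors.

```python
# Illustrative sketch: reading Type I / Type II errors off a confusion matrix.
from sklearn.metrics import confusion_matrix

# Made-up binary labels (1 = condition present, 0 = condition absent).
y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 1, 0, 0, 1, 0, 1, 1]

# For binary labels {0, 1}, confusion_matrix returns [[TN, FP], [FN, TP]].
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Type I errors  (false positives):", fp)  # claimed an effect that isn't there
print("Type II errors (false negatives):", fn)  # missed an effect that is there
```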
+
+#### 55) What’s the difference between a generative and discriminative model? [[src]](https://www.springboard.com/blog/machine-learning-interview-questions/)
+A generative model learns the categories of data themselves (how each category’s data is distributed), while a discriminative model simply learns the distinction between the different categories. Discriminative models will generally outperform generative models on classification tasks.
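As a rough illustration (assuming scikit-learn; the choice of models and data is arbitrary), Gaussian naive Bayes is a classic generative classifier, while logistic regression is a classic discriminative one:

```python
# Illustrative sketch: a generative vs. a discriminative classifier.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Toy data set; sizes and parameters are arbitrary.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Generative: models how each class generates its features (class-conditional
# distributions plus class priors), then classifies via Bayes' rule.
generative = GaussianNB()

# Discriminative: models the decision boundary directly, i.e. the probability
# of the label given the features.
discriminative = LogisticRegression(max_iter=1000)

for name, model in [("GaussianNB (generative)", generative),
                    ("LogisticRegression (discriminative)", discriminative)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```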
## Contributions
Contributions are most welcomed.
1. Fork the repository.

0 commit comments
