Machine Learning
25. Bagging
● Bagging, or Bootstrap Aggregating, is a technique that aims to reduce variance
and avoid overfitting. Here's how it works:
1. Data Sampling: Multiple subsets of the training data are created by random sampling
with replacement (bootstrap sampling).
2. Model Training: A separate model (usually the same type) is trained on each subset
independently.
3. Prediction Aggregation: For classification, the final output is typically decided by
majority voting among models; for regression, it’s the average prediction.
Example Algorithm: Random Forest is a popular bagging method that uses multiple
decision trees to improve accuracy and stability.
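A minimal sketch of the bagging loop in Python; the synthetic dataset and the decision-tree base learners are illustrative assumptions, not fixed parts of the technique:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)  # binary toy data
rng = np.random.default_rng(0)

models = []
for _ in range(25):
    # 1. Bootstrap sample: draw n rows with replacement
    idx = rng.integers(0, len(X), size=len(X))
    # 2. Train an independent model on each subset
    models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

# 3. Aggregate by majority vote (averaging would be used for regression)
votes = np.stack([m.predict(X) for m in models])   # shape: (n_models, n_samples)
majority = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble training accuracy:", (majority == y).mean())
```

In practice the same idea is available off the shelf, e.g. scikit-learn's BaggingClassifier or RandomForestClassifier.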
Boosting
Boosting focuses on improving model accuracy by sequentially correcting the errors of previous models, primarily reducing bias (and often variance as well). Here’s the process:
1. Sequential Learning: Models are trained sequentially, with each new model
attempting to correct the errors made by the previous one.
2. Weighted Samples: Incorrectly predicted samples are given higher weights, making
the next model focus on these harder-to-classify examples.
3. Weighted Prediction: The final prediction is often a weighted sum of the individual
models’ predictions.
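A hand-rolled AdaBoost-style sketch that makes the three steps above concrete; the decision-stump weak learner, the synthetic data, and the 50-round budget are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y01 = make_classification(n_samples=500, random_state=0)
y = 2 * y01 - 1                       # relabel classes to {-1, +1}
w = np.full(len(X), 1 / len(X))       # start with uniform sample weights

stumps, alphas = [], []
for _ in range(50):
    # 1. Sequential learning: fit the next weak model on the weighted data
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = w[pred != y].sum()                         # weighted error rate
    alpha = 0.5 * np.log((1 - err) / (err + 1e-10))  # this model's vote weight
    # 2. Weighted samples: upweight the examples this model got wrong
    w *= np.exp(-alpha * y * pred)
    w /= w.sum()
    stumps.append(stump)
    alphas.append(alpha)

# 3. Weighted prediction: sign of the weighted sum of model outputs
F = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
print("ensemble training accuracy:", (np.sign(F) == y).mean())
```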
Kernel Trick
The kernel trick is a method for implicitly mapping the input features into a higher-dimensional space without explicitly computing the transformation. It allows SVMs to handle non-linear decision boundaries efficiently by computing the dot product in the higher-dimensional space through a kernel function, making SVMs versatile and powerful at capturing complex patterns in the data. Common kernel functions include the following; a numerical sketch of the trick follows the list.
● Linear Kernel
● Polynomial Kernel
● Radial Basis Function (RBF) Kernel
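A small numerical check of the idea, assuming the degree-2 polynomial kernel K(x, z) = (x · z)² on 2-D inputs: the kernel value computed in the input space equals the dot product of explicitly mapped features, without ever building the higher-dimensional vectors:

```python
import numpy as np

def phi(v):
    # Explicit degree-2 feature map matching K(x, z) = (x . z)^2 in 2-D:
    # phi(v) = [v1^2, v2^2, sqrt(2) * v1 * v2]
    return np.array([v[0] ** 2, v[1] ** 2, np.sqrt(2) * v[0] * v[1]])

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

lhs = np.dot(x, z) ** 2          # kernel trick: stay in the input space
rhs = np.dot(phi(x), phi(z))     # map to 3-D explicitly, then dot product
print(lhs, rhs)                  # both print 16.0
```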
Clustering
● Clustering is the process of organizing objects into groups whose members are similar in some way.
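A minimal clustering sketch, assuming k-means from scikit-learn and synthetic blob data as stand-ins for the general idea:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # three toy groups
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(km.labels_[:10])       # group assignment for the first 10 objects
print(km.cluster_centers_)   # centroid of each learned group
```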