Commit 3a9db50

Merge pull request animator#1261 from revanth1718/main
XGBoost Topic
2 parents 30d2944 + da0d7f9 commit 3a9db50

3 files changed (+93 −0 lines changed)

15.9 KB binary file not shown.

contrib/machine-learning/index.md

Lines changed: 1 addition & 0 deletions

@@ -25,3 +25,4 @@
 - [Naive Bayes](naive-bayes.md)
 - [Neural network regression](neural-network-regression.md)
 - [PyTorch Fundamentals](pytorch-fundamentals.md)
+- [XGBoost](xgboost.md)

contrib/machine-learning/xgboost.md

Lines changed: 92 additions & 0 deletions
@@ -0,0 +1,92 @@
# XGBoost

XGBoost is an implementation of gradient boosted decision trees designed for speed and performance.

## Introduction to Gradient Boosting

Gradient boosting is a powerful technique for building predictive models that has seen widespread success in various applications.

- **Boosting Concept**: Boosting originated from the idea of modifying weak learners to improve their predictive capability.
- **AdaBoost**: The first successful boosting algorithm was Adaptive Boosting (AdaBoost), which uses decision stumps as weak learners.
- **Gradient Boosting Machines (GBM)**: AdaBoost and related algorithms were later reformulated as Gradient Boosting Machines, casting boosting as a numerical optimization problem.
- **Algorithm Elements**:
  - _Loss function_: Determines the objective to minimize (e.g., cross-entropy for classification, mean squared error for regression).
  - _Weak learner_: Typically, decision trees are used as weak learners.
  - _Additive model_: New weak learners are added iteratively to minimize the loss function, correcting the errors of previous models (see the sketch after this list).

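To make the additive-model idea concrete, here is a minimal, illustrative sketch of the boosting loop for regression with a squared-error loss, using shallow scikit-learn decision trees as the weak learners. The round count, learning rate, and tree depth are arbitrary example values, not any library's defaults.

```python
# Minimal gradient boosting sketch: each round fits a small tree to the
# residuals (the negative gradient of the squared-error loss) of the current
# ensemble and adds a damped version of its predictions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost_fit(X, y, n_rounds=50, learning_rate=0.1, max_depth=2):
    base = y.mean()                                # start from a constant model
    prediction = np.full(len(y), base)
    trees = []
    for _ in range(n_rounds):
        residuals = y - prediction                 # what the ensemble still gets wrong
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)
        prediction += learning_rate * tree.predict(X)
        trees.append(tree)
    return base, trees

def gradient_boost_predict(X, base, trees, learning_rate=0.1):
    prediction = np.full(X.shape[0], base)
    for tree in trees:
        prediction += learning_rate * tree.predict(X)
    return prediction
```
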
## Introduction to XGBoost

- eXtreme Gradient Boosting (XGBoost) is a more **regularized form** of gradient boosting, as it uses **advanced regularization (L1 & L2)**, improving the model's **generalization capabilities**.
- It is suitable when there is **a large number of training samples and a small number of features**, or when there is **a mixture of categorical and numerical features**.
- **Development**: Created by Tianqi Chen, XGBoost is designed for computational speed and model performance.
- **Key Features**:
  - _Speed_: Achieved through careful engineering, including parallelization of tree construction, distributed computing, and cache optimization.
  - _Support for Variations_: XGBoost supports various techniques and optimizations.
  - _Out-of-Core Computing_: Can handle very large datasets that don't fit into memory.
- **Advantages**:
  - _Sparse Optimization_: Suitable for datasets with many zero values.
  - _Regularization_: Implements advanced regularization techniques (L1 and L2), enhancing generalization capabilities.
  - _Parallel Training_: Utilizes all CPU cores during training for faster processing.
  - _Multiple Loss Functions_: Supports different loss functions based on the problem type.
  - _Bagging and Early Stopping_: Additional techniques for improving performance and efficiency (these knobs appear in the parameter sketch at the end of this section).
- **Pre-Sorted Decision Tree Algorithm**:
  1. Features are pre-sorted by their values.
  2. Traversing the segmentation points involves finding the best split point on a feature at a cost of O(#data) (a sketch of this scan follows the list).
  3. Data is split into left and right child nodes after the split point is found.
  4. Pre-sorting allows for accurate split point determination.
- **Limitations**:
  1. Iterative Traversal: Each iteration requires traversing the entire training data multiple times.
  2. Memory Consumption: Loading the entire training data into memory limits its size, while not loading it leads to time-consuming read/write operations.
  3. Space Consumption: Pre-sorting consumes extra space, since feature sorting results and split gain calculations must be stored.

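As a small, illustrative sketch of the pre-sorted split search described above, the function below scans one numeric feature in a single O(#data) pass over its sorted values. It uses a plain squared-error gain rather than XGBoost's gradient and hessian statistics, so it is an approximation of the idea, not XGBoost's actual implementation.

```python
import numpy as np

def best_split(feature_values, targets):
    """Find the best threshold for one feature by scanning its pre-sorted values."""
    order = np.argsort(feature_values)             # pre-sort the feature once
    x, y = feature_values[order], targets[order]
    n, total_sum = len(y), y.sum()
    best_gain, best_threshold = 0.0, None
    left_sum = 0.0
    for i in range(n - 1):                         # single O(#data) pass
        left_sum += y[i]
        if x[i] == x[i + 1]:
            continue                               # cannot split between identical values
        right_sum = total_sum - left_sum
        # Reduction in the sum of squared errors compared with a single leaf
        gain = (left_sum**2 / (i + 1)
                + right_sum**2 / (n - i - 1)
                - total_sum**2 / n)
        if gain > best_gain:
            best_gain, best_threshold = gain, (x[i] + x[i + 1]) / 2
    return best_threshold, best_gain

# Example usage on made-up data
rng = np.random.default_rng(0)
feature = rng.normal(size=100)
target = (feature > 0.3).astype(float) + rng.normal(scale=0.1, size=100)
print(best_split(feature, target))
```
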
XGBoosting:

![image](assets/XG_1.webp)

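To illustrate how the regularization, parallel-training, and early-stopping points listed above surface in practice, here is a short, hedged sketch using XGBoost's scikit-learn wrapper on a toy dataset. The parameter values are arbitrary, and it assumes a recent xgboost release in which `early_stopping_rounds` is accepted by the `XGBClassifier` constructor.

```python
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Toy binary-classification data, with a validation split held out for early stopping
X, y = load_breast_cancer(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=42)

model = xgb.XGBClassifier(
    n_estimators=500,          # upper bound on the number of boosting rounds
    learning_rate=0.1,
    reg_alpha=0.1,             # L1 regularization on leaf weights
    reg_lambda=1.0,            # L2 regularization on leaf weights
    subsample=0.8,             # row subsampling per tree (bagging-style)
    n_jobs=-1,                 # use all CPU cores during training
    early_stopping_rounds=10,  # stop once the validation metric stops improving
)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], verbose=False)
print("Stopped at boosting round:", model.best_iteration)
print("Validation accuracy:", model.score(X_valid, y_valid))
```
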
## Develop Your First XGBoost Model

This code uses the XGBoost library to train a model on the Iris dataset: it splits the data, sets hyperparameters, trains the model, makes predictions, and evaluates accuracy, reaching an accuracy of 1.0 on the testing set (see the output below).

```python
# XGBoost with Iris Dataset
# Importing necessary libraries
import numpy as np
import xgboost as xgb
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Loading a sample dataset (Iris dataset)
data = load_iris()
X = data.data
y = data.target

# Splitting the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Converting the dataset into DMatrix format
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)

# Setting hyperparameters for XGBoost
params = {
    'max_depth': 3,
    'eta': 0.1,
    'objective': 'multi:softmax',
    'num_class': 3
}

# Training the XGBoost model
num_round = 50
model = xgb.train(params, dtrain, num_round)

# Making predictions on the testing set
y_pred = model.predict(dtest)

# Evaluating the model
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
```

### Output

Accuracy: 1.0

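As a follow-up usage example, continuing from the script above, the snippet below reuses the trained booster on a single new measurement. The feature values are made up for illustration, and `data.target_names` comes from the Iris dataset loaded earlier.

```python
# Predicting the class of one new flower with the booster trained above
new_sample = np.array([[5.1, 3.5, 1.4, 0.2]])    # sepal/petal measurements (illustrative)
dnew = xgb.DMatrix(new_sample)
predicted_class = int(model.predict(dnew)[0])    # 'multi:softmax' returns the class index
print("Predicted species:", data.target_names[predicted_class])
```
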
## Conclusion

XGBoost's focus on speed, performance, and scalability has made it one of the most widely used and powerful predictive modeling algorithms available. Its ability to handle large datasets efficiently, along with its advanced features and optimizations, makes it a valuable tool in machine learning and data science.

## Reference

- [Machine Learning Prediction of Turning Precision Using Optimized XGBoost Model](https://www.mdpi.com/2076-3417/12/15/7739)
