Commit d138b0e

committed: "third"
1 parent 9e1a20d commit d138b0e

File tree: 1 file changed (+12, -10 lines)

1 file changed

+12
-10
lines changed

contrib/machine-learning/Random_Forest.md

Lines changed: 12 additions & 10 deletions
@@ -61,14 +61,15 @@ Hyperparameter tuning can significantly improve the performance of a Random Fore
 #### Classification Example
 Below is a simple example of using Random Forest for a classification task with the Iris dataset.
 
-'''
+```
 import numpy as np
 import pandas as pd
 from sklearn.datasets import load_iris
 from sklearn.ensemble import RandomForestClassifier
 from sklearn.model_selection import train_test_split
 from sklearn.metrics import accuracy_score, classification_report
 
+
 # Load dataset
 iris = load_iris()
 X, y = iris.data, iris.target
@@ -90,13 +91,13 @@ accuracy = accuracy_score(y_test, y_pred)
 print(f"Accuracy: {accuracy * 100:.2f}%")
 print("Classification Report:\n", classification_report(y_test, y_pred))
 
-'''
+```
 
 #### Feature Importance
 Random Forest provides a way to measure the importance of each feature in making predictions.
 
 
-'''
+```
 import matplotlib.pyplot as plt
 
 # Get feature importances
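The hunks above only swap the fence markers around the classification snippet; the code itself is split across the diff, and the train/test split and model settings fall between hunks. A self-contained sketch of what the example plausibly runs as (the `test_size`, `random_state`, and `n_estimators` values here are assumptions, not shown in the diff):

```python
# Sketch of the tutorial's classification example; split and model
# parameters are assumed, since the diff elides those lines.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report

# Load dataset
iris = load_iris()
X, y = iris.data, iris.target

# Split into train and test sets (parameters assumed)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Train the classifier
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

# Evaluate on the held-out split
y_pred = clf.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy * 100:.2f}%")
print("Classification Report:\n", classification_report(y_test, y_pred))
```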
@@ -115,11 +116,11 @@ plt.bar(range(X.shape[1]), importances[indices], align='center')
 plt.xticks(range(X.shape[1]), indices)
 plt.xlim([-1, X.shape[1]])
 plt.show()
-'''
+```
 #### Hyperparameter Tuning
 Using Grid Search for hyperparameter tuning.
 
-'''
+```
 from sklearn.model_selection import GridSearchCV
 
 # Define the parameter grid
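The feature-importance plot in the diff relies on a fitted model from the earlier snippet. A runnable sketch with that setup filled in (the model configuration is assumed; the Agg backend and `savefig` replace `plt.show()` so it runs headless):

```python
# Sketch of the feature-importance example; the fitted model it relies
# on is reconstructed here with assumed settings.
import matplotlib
matplotlib.use("Agg")  # headless backend (assumption, not in the diff)
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
X, y = iris.data, iris.target
clf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

# Get feature importances and sort them in descending order
importances = clf.feature_importances_
indices = np.argsort(importances)[::-1]

plt.figure()
plt.title("Feature Importances")
plt.bar(range(X.shape[1]), importances[indices], align='center')
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.savefig("importances.png")  # plt.show() in the original
```

scikit-learn normalizes `feature_importances_` so the values sum to 1, which is why the bars can be read as relative contributions.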
@@ -138,11 +139,11 @@ grid_search.fit(X_train, y_train)
 
 # Print the best parameters
 print("Best parameters found: ", grid_search.best_params_)
-Regression Example
+```
+#### Regression Example
 Below is a simple example of using Random Forest for a regression task with the Boston housing dataset.
 
-python
-Copy code
+```
 import numpy as np
 import pandas as pd
 from sklearn.datasets import load_boston
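The Grid Search section's parameter grid falls outside the hunks shown, so its contents are not visible here. A minimal sketch of the tuning step with an assumed grid (the specific values and `cv=3` are illustrative, not the tutorial's):

```python
# Sketch of the Grid Search tuning step; the parameter grid below is an
# assumption, since the diff does not show the tutorial's actual grid.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Define the parameter grid (assumed values)
param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [None, 5],
}

# Exhaustively search the grid with 3-fold cross-validation
grid_search = GridSearchCV(
    RandomForestClassifier(random_state=42), param_grid, cv=3
)
grid_search.fit(X_train, y_train)

# Print the best parameters
print("Best parameters found: ", grid_search.best_params_)
```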
@@ -171,10 +172,11 @@ mse = mean_squared_error(y_test, y_pred)
 r2 = r2_score(y_test, y_pred)
 print(f"Mean Squared Error: {mse:.2f}")
 print(f"R^2 Score: {r2:.2f}")
-Conclusion
+```
+### Conclusion
 Random Forest is a powerful and flexible machine learning algorithm that can handle both classification and regression tasks. Its ability to create an ensemble of decision trees leads to robust and accurate models. However, it is important to be mindful of the computational cost associated with training multiple trees.
 
-References
+### References
 Scikit-learn Random Forest Documentation
 Wikipedia: Random Forest
 Machine Learning Mastery: Introduction to Random Forest
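One caveat with the regression example: `load_boston` was deprecated in scikit-learn 1.0 and removed in 1.2, so the import shown in the diff fails on current versions. A sketch of the same workflow with a synthetic dataset from `make_regression` standing in for the Boston data (the dataset swap and model settings are substitutions, not the tutorial's code):

```python
# Regression sketch mirroring the diff's metrics; make_regression stands
# in for the removed load_boston dataset (a substitution).
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic stand-in for the Boston housing data
X, y = make_regression(n_samples=500, n_features=8, noise=5.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Train the regressor (settings assumed)
reg = RandomForestRegressor(n_estimators=100, random_state=42)
reg.fit(X_train, y_train)

# Evaluate with the same metrics as the diff
y_pred = reg.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
r2 = r2_score(y_test, y_pred)
print(f"Mean Squared Error: {mse:.2f}")
print(f"R^2 Score: {r2:.2f}")
```

On current scikit-learn, `fetch_california_housing` is the usual real-data replacement, but it downloads data at first use, so the synthetic dataset keeps this sketch self-contained.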

0 commit comments