Commit 31a75c0

Sven Eschlbeck and ogrisel authored

DOC increase speed in plot_learning_curve.py (#21628)
* Adapted the number of splits
* Update plot_learning_curve.py
* Update plot_learning_curve.py: added ``random_state=0`` to the call of ``learning_curve``
* Update plot_learning_curve.py: added ``shuffle=True`` so that ``random_state`` has an effect
* Update examples/model_selection/plot_learning_curve.py

Co-authored-by: Olivier Grisel <olivier.grisel@ensta.org>
1 parent 6077d52 commit 31a75c0
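The commit message above notes that ``random_state`` was added together with ``shuffle=True``, since ``learning_curve`` only uses its ``random_state`` when shuffling is enabled. A minimal sketch of that point, assuming scikit-learn is installed (the ``cv=3`` value here is illustrative, not from the commit):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.naive_bayes import GaussianNB

X, y = load_digits(return_X_y=True)

# random_state only takes effect when shuffle=True; with the default
# shuffle=False it is ignored and each training subset is a contiguous
# prefix of the CV training fold.
sizes, train_scores, test_scores = learning_curve(
    GaussianNB(), X, y, cv=3, shuffle=True, random_state=0
)
print(train_scores.shape)  # (5, 3): 5 default train-size ticks, 3 CV folds
```

With the default ``train_sizes=np.linspace(0.1, 1.0, 5)``, the score arrays have one row per train-size tick and one column per CV fold.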

File tree

1 file changed: +3 −3 lines

examples/model_selection/plot_learning_curve.py

Lines changed: 3 additions & 3 deletions
@@ -179,9 +179,9 @@ def plot_learning_curve(
 X, y = load_digits(return_X_y=True)

 title = "Learning Curves (Naive Bayes)"
-# Cross validation with 100 iterations to get smoother mean test and train
+# Cross validation with 50 iterations to get smoother mean test and train
 # score curves, each time with 20% data randomly selected as a validation set.
-cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
+cv = ShuffleSplit(n_splits=50, test_size=0.2, random_state=0)

 estimator = GaussianNB()
 plot_learning_curve(
@@ -190,7 +190,7 @@ def plot_learning_curve(

 title = r"Learning Curves (SVM, RBF kernel, $\gamma=0.001$)"
 # SVC is more expensive so we do a lower number of CV iterations:
-cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
+cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
 estimator = SVC(gamma=0.001)
 plot_learning_curve(
 estimator, title, X, y, axes=axes[:, 1], ylim=(0.7, 1.01), cv=cv, n_jobs=4
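The speed-up in this diff comes from the fact that each `ShuffleSplit` iteration costs one fit-and-score per train-size tick, so halving `n_splits` roughly halves the example's runtime. A minimal sketch of the post-commit configuration for the Naive Bayes panel, assuming scikit-learn is installed (the explicit ``train_sizes`` list here is illustrative, not from the commit):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import ShuffleSplit, learning_curve
from sklearn.naive_bayes import GaussianNB

X, y = load_digits(return_X_y=True)

# After this commit: 50 random 80/20 splits instead of 100. Fewer
# splits means a noisier mean curve but proportionally less compute.
cv = ShuffleSplit(n_splits=50, test_size=0.2, random_state=0)

train_sizes, train_scores, test_scores = learning_curve(
    GaussianNB(), X, y, cv=cv, train_sizes=[0.1, 0.5, 1.0]
)
print(test_scores.shape)  # (3, 50): one column of scores per CV split
```

Averaging `test_scores` over its second axis gives the mean validation curve that the example plots; with 50 splits the mean is still smooth enough for the plot while the run finishes in about half the time.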
