Commit a65f60e

typos [ci skip]
1 parent 81bdc9f commit a65f60e


5 files changed: +7 lines, -6 lines

doc/modules/linear_model.rst

Lines changed: 2 additions & 2 deletions
@@ -759,8 +759,8 @@ Multinomial loss "lbfgs", "sag", "saga" or "newton-cg"
 Very Large dataset (`n_samples`) "sag" or "saga"
 ================================= =====================================
 
-The "saga" solver is almost always a the best choice. The "liblinear"
-solver is used by default for historical reasons.
+The "saga" solver is often the best choice. The "liblinear" solver is
+used by default for historical reasons.
 
 For large dataset, you may also consider using :class:`SGDClassifier`
 with 'log' loss.
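
The corrected guidance maps directly onto estimator parameters; here is a minimal sketch of both recommendations (the synthetic dataset is a stand-in, not part of the docs):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression, SGDClassifier

    X, y = make_classification(n_samples=10000, n_features=20, random_state=0)

    # "saga" is often the best choice; "liblinear" stays the default
    # only for historical reasons.
    clf = LogisticRegression(solver='saga', max_iter=1000).fit(X, y)

    # On very large datasets, SGDClassifier with 'log' loss fits a
    # logistic regression model by stochastic gradient descent.
    sgd = SGDClassifier(loss='log', max_iter=1000, tol=1e-3).fit(X, y)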

examples/linear_model/plot_sparse_logistic_regression_20newsgroups.py

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@
 regression yields more accurate results and is faster to train on the larger
 scale dataset.
 
-Here we use the l1 sparsity that trims the weights of no to informative
+Here we use the l1 sparsity that trims the weights of not informative
 features to zero. This is good if the goal is to extract the strongly
 discriminative vocabulary of each class. If the goal is to get the best
 predictive accuracy, it is better to use the non sparsity-inducing l2 penalty
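
The sentence being fixed describes l1-induced sparsity; a short sketch of that effect with the same estimator (synthetic features stand in for the 20 newsgroups vocabulary):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=100,
                               n_informative=10, random_state=0)

    # The l1 penalty trims the weights of uninformative features to
    # exactly zero, keeping only the discriminative ones.
    clf = LogisticRegression(penalty='l1', solver='saga',
                             max_iter=1000).fit(X, y)
    print("zeroed coefficients: %.1f%%" % (100 * np.mean(clf.coef_ == 0)))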

sklearn/linear_model/logistic.py

Lines changed: 1 addition & 1 deletion
@@ -977,7 +977,7 @@ class LogisticRegression(BaseEstimator, LinearClassifierMixin,
         'sag' and 'lbfgs' solvers support only l2 penalties.
 
         .. versionadded:: 0.19
-           l1 penalty with SAGA solver (allowing 'multinomial + L1)
+           l1 penalty with SAGA solver (allowing 'multinomial' + L1)
 
     dual : bool, default: False
         Dual or primal formulation. Dual formulation is only implemented for
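
The requoted docstring line refers to the combination enabled in 0.19; a hedged sketch of what it permits (data and parameter values are illustrative):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                               n_informative=10, random_state=0)

    # Since 0.19 the saga solver accepts penalty='l1', so the
    # 'multinomial' loss can be combined with L1 regularization.
    clf = LogisticRegression(penalty='l1', solver='saga',
                             multi_class='multinomial',
                             max_iter=1000).fit(X, y)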

sklearn/linear_model/sag_fast.pyx

Lines changed: 1 addition & 0 deletions
@@ -4,6 +4,7 @@
 #
 # Authors: Danny Sullivan <dbsullivan23@gmail.com>
 #          Tom Dupre la Tour <tom.dupre-la-tour@m4x.org>
+#          Arthur Mensch <arthur.mensch@m4x.org
 #
 # License: BSD 3 clause
 cimport numpy as np

sklearn/linear_model/tests/test_logistic.py

Lines changed: 2 additions & 2 deletions
@@ -896,7 +896,7 @@ def test_logreg_intercept_scaling_zero():
 
 
 def test_logreg_l1():
-    # Because liblinear penalizes the intercept and saga does not, we do
+    # Because liblinear penalizes the intercept and saga does not, we do not
     # fit the intercept to make it possible to compare the coefficients of
     # the two models at convergence.
     rng = np.random.RandomState(42)
@@ -924,7 +924,7 @@ def test_logreg_l1():
 
 
 def test_logreg_l1_sparse_data():
-    # Because liblinear penalizes the intercept and saga does not, we do
+    # Because liblinear penalizes the intercept and saga does not, we do not
     # fit the intercept to make it possible to compare the coefficients of
     # the two models at convergence.
     rng = np.random.RandomState(42)
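
The clarified comment explains the fit_intercept=False choice in both tests; a minimal sketch of the comparison it motivates (data and sizes are illustrative, not the tests' actual setup):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, n_features=10, random_state=42)

    # liblinear penalizes the intercept while saga does not, so the two
    # solvers only optimize the same objective when no intercept is fit.
    lib = LogisticRegression(penalty='l1', solver='liblinear',
                             fit_intercept=False).fit(X, y)
    saga = LogisticRegression(penalty='l1', solver='saga',
                              fit_intercept=False, max_iter=1000).fit(X, y)
    print("max |coef difference|:", np.max(np.abs(lib.coef_ - saga.coef_)))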
