Commit 9acf93e

MNT Fix rst issues found by sphinx-lint (#31114)
1 parent 0cf0968 commit 9acf93e

24 files changed: +220 additions, -220 deletions

doc/developers/contributing.rst

Lines changed: 4 additions & 4 deletions

@@ -292,10 +292,10 @@ how to set up your git repository:
 
 .. code-block:: text
 
-    origin   git@github.com:YourLogin/scikit-learn.git (fetch)
-    origin   git@github.com:YourLogin/scikit-learn.git (push)
-    upstream git@github.com:scikit-learn/scikit-learn.git (fetch)
-    upstream git@github.com:scikit-learn/scikit-learn.git (push)
+    origin   git@github.com:YourLogin/scikit-learn.git (fetch)
+    origin   git@github.com:YourLogin/scikit-learn.git (push)
+    upstream git@github.com:scikit-learn/scikit-learn.git (fetch)
+    upstream git@github.com:scikit-learn/scikit-learn.git (push)
 
 You should now have a working installation of scikit-learn, and your git repository
 properly configured. It could be useful to run some test to verify your installation.

doc/developers/develop.rst

Lines changed: 1 addition & 1 deletion

@@ -499,7 +499,7 @@ Estimator Tags
 The estimator tags are annotations of estimators that allow programmatic inspection of
 their capabilities, such as sparse matrix support, supported output types and supported
 methods. The estimator tags are an instance of :class:`~sklearn.utils.Tags` returned by
-the method :meth:`~sklearn.base.BaseEstimator.__sklearn_tags__()`. These tags are used
+the method :meth:`~sklearn.base.BaseEstimator.__sklearn_tags__`. These tags are used
 in different places, such as :func:`~base.is_regressor` or the common checks run by
 :func:`~sklearn.utils.estimator_checks.check_estimator` and
 :func:`~sklearn.utils.estimator_checks.parametrize_with_checks`, where tags determine
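The hunk above drops the trailing ``()`` from a ``:meth:`` role target, since Sphinx renders method roles with parentheses on its own. A minimal check in the same spirit can be sketched with a regex (an illustrative assumption, not sphinx-lint's actual implementation):

```python
import re

# Flag Sphinx roles such as :meth:`...` or :func:`...` whose target ends
# in "()"; the role already adds parentheses when rendering, so the
# explicit "()" is redundant and gets flagged by linters.
ROLE_WITH_PARENS = re.compile(r":(?:meth|func):`[^`]*\(\)`")

def flag_paren_roles(line):
    """Return the offending role texts found in one line of rst."""
    return ROLE_WITH_PARENS.findall(line)

bad = "the method :meth:`~sklearn.base.BaseEstimator.__sklearn_tags__()`. These tags"
good = "the method :meth:`~sklearn.base.BaseEstimator.__sklearn_tags__`. These tags"
print(flag_paren_roles(bad))   # one match
print(flag_paren_roles(good))  # no match
```

Running it on the removed line reports one offending role; on the added line, none.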

doc/modules/cross_validation.rst

Lines changed: 4 additions & 4 deletions

@@ -931,8 +931,8 @@ A note on shuffling
 ===================
 
 If the data ordering is not arbitrary (e.g. samples with the same class label
-are contiguous), shuffling it first may be essential to get a meaningful cross-
-validation result. However, the opposite may be true if the samples are not
+are contiguous), shuffling it first may be essential to get a meaningful
+cross-validation result. However, the opposite may be true if the samples are not
 independently and identically distributed. For example, if samples correspond
 to news articles, and are ordered by their time of publication, then shuffling
 the data will likely lead to a model that is overfit and an inflated validation
@@ -943,8 +943,8 @@ Some cross validation iterators, such as :class:`KFold`, have an inbuilt option
 to shuffle the data indices before splitting them. Note that:
 
 * This consumes less memory than shuffling the data directly.
-* By default no shuffling occurs, including for the (stratified) K fold cross-
-  validation performed by specifying ``cv=some_integer`` to
+* By default no shuffling occurs, including for the (stratified) K fold
+  cross-validation performed by specifying ``cv=some_integer`` to
   :func:`cross_val_score`, grid search, etc. Keep in mind that
   :func:`train_test_split` still returns a random split.
 * The ``random_state`` parameter defaults to ``None``, meaning that the
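The shuffling caveat that this hunk rewraps can be seen with a tiny pure-Python fold splitter (a sketch only, not scikit-learn's :class:`KFold`): with contiguous class labels and no shuffling, every test fold contains a single class, so cross-validation scores become meaningless.

```python
import random

def kfold_indices(n_samples, n_splits):
    """Yield (train, test) index lists for contiguous, unshuffled folds."""
    fold = n_samples // n_splits
    for k in range(n_splits):
        test = list(range(k * fold, (k + 1) * fold))
        train = [i for i in range(n_samples) if i not in test]
        yield train, test

# 12 samples whose class labels are contiguous: 4 of class 0, then 1, then 2.
y = [0] * 4 + [1] * 4 + [2] * 4

for train, test in kfold_indices(len(y), n_splits=3):
    print(sorted({y[i] for i in test}))  # each test fold sees one class only

# Shuffling the indices first (fixed seed for reproducibility) mixes the
# classes across folds, which is what KFold(shuffle=True) achieves.
idx = list(range(len(y)))
random.Random(0).shuffle(idx)
```

The loop prints `[0]`, `[1]`, `[2]`: each unshuffled fold tests on a class the model never saw in training.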

doc/modules/ensemble.rst

Lines changed: 3 additions & 3 deletions

@@ -1404,10 +1404,10 @@ calculated as follows:
 ================ ========== ========== ==========
 classifier       class 1    class 2    class 3
 ================ ========== ========== ==========
-classifier 1     w1 * 0.2   w1 * 0.5   w1 * 0.3
-classifier 2     w2 * 0.6   w2 * 0.3   w2 * 0.1
+classifier 1     w1 * 0.2   w1 * 0.5   w1 * 0.3
+classifier 2     w2 * 0.6   w2 * 0.3   w2 * 0.1
 classifier 3     w3 * 0.3   w3 * 0.4   w3 * 0.3
-weighted average 0.37       0.4        0.23
+weighted average 0.37       0.4        0.23
 ================ ========== ========== ==========
 
 Here, the predicted class label is 2, since it has the highest average probability. See
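The "weighted average" row of the table above is a plain weighted mean of the per-classifier probabilities; assuming equal weights w1 = w2 = w3 = 1 (an assumption that reproduces the 0.37 / 0.4 / 0.23 figures), the computation can be sketched as:

```python
# Per-class probabilities from the table: one row per classifier.
probas = [
    [0.2, 0.5, 0.3],  # classifier 1
    [0.6, 0.3, 0.1],  # classifier 2
    [0.3, 0.4, 0.3],  # classifier 3
]
weights = [1, 1, 1]  # assumed equal weights w1 = w2 = w3

# Weighted average probability for each of the 3 classes.
avg = [
    sum(w * p[j] for w, p in zip(weights, probas)) / sum(weights)
    for j in range(3)
]
print([round(a, 2) for a in avg])              # [0.37, 0.4, 0.23]
print(1 + max(range(3), key=avg.__getitem__))  # predicted class label: 2
```

Class 2 has the highest average probability, matching the prediction stated under the table.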

doc/modules/feature_extraction.rst

Lines changed: 3 additions & 3 deletions

@@ -792,9 +792,9 @@ problems which are currently outside of the scope of scikit-learn.
 Vectorizing a large text corpus with the hashing trick
 ------------------------------------------------------
 
-The above vectorization scheme is simple but the fact that it holds an **in-
-memory mapping from the string tokens to the integer feature indices** (the
-``vocabulary_`` attribute) causes several **problems when dealing with large
+The above vectorization scheme is simple but the fact that it holds an
+**in-memory mapping from the string tokens to the integer feature indices**
+(the ``vocabulary_`` attribute) causes several **problems when dealing with large
 datasets**:
 
 - the larger the corpus, the larger the vocabulary will grow and hence the
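The rewrapped sentence introduces the hashing trick, which replaces the in-memory ``vocabulary_`` mapping with a hash function over tokens. A toy sketch of the idea (md5 and 8 features are arbitrary choices for illustration, not what scikit-learn's HashingVectorizer uses):

```python
import hashlib

N_FEATURES = 8  # real vectorizers use many more, e.g. 2**20

def hashed_counts(doc):
    """Count tokens into a fixed-size vector via hashing: no vocabulary
    dictionary is kept in memory, however large the corpus grows."""
    counts = [0] * N_FEATURES
    for token in doc.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        counts[h % N_FEATURES] += 1
    return counts

print(hashed_counts("the cat sat on the mat"))
```

The trade-off, as the surrounding section discusses, is that hashing is one-way: distinct tokens may collide, and feature indices can no longer be mapped back to strings.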

doc/modules/lda_qda.rst

Lines changed: 4 additions & 4 deletions

@@ -93,10 +93,10 @@ predicted class is the one that maximises this log-posterior.
 
 .. note:: **Relation with Gaussian Naive Bayes**
 
-   If in the QDA model one assumes that the covariance matrices are diagonal,
-   then the inputs are assumed to be conditionally independent in each class,
-   and the resulting classifier is equivalent to the Gaussian Naive Bayes
-   classifier :class:`naive_bayes.GaussianNB`.
+   If in the QDA model one assumes that the covariance matrices are diagonal,
+   then the inputs are assumed to be conditionally independent in each class,
+   and the resulting classifier is equivalent to the Gaussian Naive Bayes
+   classifier :class:`naive_bayes.GaussianNB`.
 
 LDA
 ---
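The note being re-indented states the QDA/GaussianNB equivalence; the one-step reason, sketched here for context (a standard derivation, not part of the diff): with a diagonal covariance, the class-conditional Gaussian density factorizes per feature.

```latex
% With \Sigma_k = \mathrm{diag}(\sigma_{k1}^2, \dots, \sigma_{kd}^2),
% the QDA log-posterior splits into a sum of per-feature terms:
\log P(y = k \mid x)
  = -\frac{1}{2} \sum_{j=1}^{d}
      \left[ \frac{(x_j - \mu_{kj})^2}{\sigma_{kj}^2}
             + \log\!\left(2\pi \sigma_{kj}^2\right) \right]
    + \log P(y = k) + \mathrm{Cst}
```

Each feature contributes an independent one-dimensional Gaussian term, which is exactly the decision function of Gaussian Naive Bayes.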

doc/modules/model_evaluation.rst

Lines changed: 1 addition & 1 deletion

@@ -258,7 +258,7 @@ Scoring string name Function
 'neg_mean_poisson_deviance'          :func:`metrics.mean_poisson_deviance`
 'neg_mean_gamma_deviance'            :func:`metrics.mean_gamma_deviance`
 'neg_mean_absolute_percentage_error' :func:`metrics.mean_absolute_percentage_error`
-'d2_absolute_error_score'            :func:`metrics.d2_absolute_error_score`
+'d2_absolute_error_score'            :func:`metrics.d2_absolute_error_score`
 ==================================== ============================================== ==================================
 
 Usage examples:
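The table row being realigned maps the ``'d2_absolute_error_score'`` scoring string to :func:`metrics.d2_absolute_error_score`. Its definition can be sketched in pure Python (an illustrative reimplementation of the standard D² formula with a median baseline, not scikit-learn's code):

```python
import statistics

def d2_absolute_error(y_true, y_pred):
    """D^2 score with absolute error: 1 minus the model's mean absolute
    error divided by that of a constant predictor using the median."""
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
    med = statistics.median(y_true)
    mae_null = sum(abs(t - med) for t in y_true) / len(y_true)
    return 1 - mae / mae_null

y_true = [3, 5, 7, 9]
print(d2_absolute_error(y_true, y_true))        # 1.0 (perfect predictions)
print(d2_absolute_error(y_true, [6, 6, 6, 6]))  # 0.0 (no better than the median)
```

A perfect model scores 1.0; predicting the median everywhere scores 0.0, and worse models go negative, which is why the scorer needs no ``neg_`` prefix.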

doc/modules/multiclass.rst

Lines changed: 11 additions & 11 deletions

@@ -173,12 +173,12 @@ Valid :term:`multiclass` representations for
 >>> y_sparse = sparse.csr_matrix(y_dense)
 >>> print(y_sparse)
 <Compressed Sparse Row sparse matrix of dtype 'int64'
-  with 4 stored elements and shape (4, 3)>
-  Coords Values
-  (0, 0) 1
-  (1, 2) 1
-  (2, 0) 1
-  (3, 1) 1
+  with 4 stored elements and shape (4, 3)>
+  Coords Values
+  (0, 0) 1
+  (1, 2) 1
+  (2, 0) 1
+  (3, 1) 1
 
 For more information about :class:`~sklearn.preprocessing.LabelBinarizer`,
 refer to :ref:`preprocessing_targets`.
@@ -384,11 +384,11 @@ An example of the same ``y`` in sparse matrix form:
 >>> print(y_sparse)
 <Compressed Sparse Row sparse matrix of dtype 'int64'
   with 4 stored elements and shape (3, 4)>
-  Coords Values
-  (0, 0) 1
-  (0, 3) 1
-  (1, 2) 1
-  (1, 3) 1
+  Coords Values
+  (0, 0) 1
+  (0, 3) 1
+  (1, 2) 1
+  (1, 3) 1
 
 .. _multioutputclassfier:
 
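The re-indented doctest output above lists the (row, column) coordinates of the nonzero entries of a sparse label-indicator matrix. The same coordinates can be recovered in pure Python (a sketch without scipy, using the dense matrix from the first hunk):

```python
# Dense label-indicator matrix: one row per sample, one column per class;
# a 1 in column j means the sample has class j.
y_dense = [
    [1, 0, 0],
    [0, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
]

# The coordinates a CSR printout would list as "Coords / Values".
coords = [(i, j) for i, row in enumerate(y_dense)
          for j, v in enumerate(row) if v]
print(len(coords))  # 4 stored elements
print(coords)       # [(0, 0), (1, 2), (2, 0), (3, 1)]
```

This matches the doctest's "4 stored elements and shape (4, 3)" and its coordinate listing.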
doc/related_projects.rst

Lines changed: 1 addition & 1 deletion

@@ -74,7 +74,7 @@ enhance the functionality of scikit-learn's estimators.
 - `dtreeviz <https://github.com/parrt/dtreeviz/>`_ A Python library for
   decision tree visualization and model interpretation.
 
-- `model-diagnostics <https://lorentzenchr.github.io/model-diagnostics/>` Tools for
+- `model-diagnostics <https://lorentzenchr.github.io/model-diagnostics/>`_ Tools for
   diagnostics and assessment of (machine learning) models (in Python).
 
 - `sklearn-evaluation <https://github.com/ploomber/sklearn-evaluation>`_
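The one-character fix above adds the trailing underscore that turns an rst ``\`text <url>\``` snippet into an actual hyperlink reference; without it, the text renders literally. An illustrative detector in the same spirit (this regex is an assumption, not sphinx-lint's rule):

```python
import re

# Match link-shaped rst spans `text <url>` and capture whether a trailing
# underscore follows the closing backtick.
LINK = re.compile(r"`[^`<]+<[^`>]+>`(_?)")

def missing_underscore(line):
    """Return True if a link-shaped span on the line lacks its trailing `_`."""
    return any(m.group(1) == "" for m in LINK.finditer(line))

broken = "- `model-diagnostics <https://lorentzenchr.github.io/model-diagnostics/>` Tools"
fixed = "- `model-diagnostics <https://lorentzenchr.github.io/model-diagnostics/>`_ Tools"
print(missing_underscore(broken))  # True
print(missing_underscore(fixed))   # False
```

Run on the removed line the check fires; on the added line it passes.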
