DOC Fix A to uppercase in See Also docstring section #18332

Merged · 4 commits · Sep 4, 2020
6 changes: 3 additions & 3 deletions doc/developers/contributing.rst
@@ -689,12 +689,12 @@ opposed to how it works "under the hood".

Finally, follow the formatting rules below to make it consistently good:

* Add "See also" in docstrings for related classes/functions.
* Add "See Also" in docstrings for related classes/functions.

* "See also" in docstrings should be one line per reference,
* "See Also" in docstrings should be one line per reference,
with a colon and an explanation, for example::

See also
See Also
--------
SelectKBest : Select features based on the k highest scores.
SelectFpr : Select features based on a false positive rate test.
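The formatting rule above can be sketched as a complete numpydoc-style docstring: the section header is spelled "See Also" (capital A), and each reference sits on one line with ` : ` and a short, period-terminated explanation. `select_top_features` is a hypothetical function used only for illustration, not part of scikit-learn:

```python
def select_top_features(scores, k):
    """Return the indices of the k highest scores.

    Parameters
    ----------
    scores : list of float
        Feature scores.
    k : int
        Number of features to keep.

    Returns
    -------
    list of int
        Indices of the k highest-scoring features, best first.

    See Also
    --------
    SelectKBest : Select features based on the k highest scores.
    SelectFpr : Select features based on a false positive rate test.
    """
    # Sort indices by score, descending, and keep the top k.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order[:k]
```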
8 changes: 4 additions & 4 deletions doc/modules/compose.rst
@@ -148,7 +148,7 @@ or by name::
* :ref:`sphx_glr_auto_examples_svm_plot_svm_anova.py`
* :ref:`sphx_glr_auto_examples_compose_plot_compare_reduction.py`

.. topic:: See also:
.. topic:: See Also:

* :ref:`composite_grid_search`

@@ -457,7 +457,7 @@ to specify the column as a list of strings (``['city']``).

Apart from a scalar or a single item list, the column selection can be specified
as a list of multiple items, an integer array, a slice, a boolean mask, or
with a :func:`~sklearn.compose.make_column_selector`. The
with a :func:`~sklearn.compose.make_column_selector`. The
:func:`~sklearn.compose.make_column_selector` is used to select columns based
on data type or column name::

@@ -542,8 +542,8 @@ many estimators. This visualization is activated by setting the
>>> # displays HTML representation in a jupyter context
>>> column_trans # doctest: +SKIP

An example of the HTML output can be seen in the
**HTML representation of Pipeline** section of
An example of the HTML output can be seen in the
**HTML representation of Pipeline** section of
:ref:`sphx_glr_auto_examples_compose_plot_column_transformer_mixed_types.py`.
As an alternative, the HTML can be written to a file using
:func:`~sklearn.utils.estimator_html_repr`::
12 changes: 6 additions & 6 deletions sklearn/_config.py
@@ -21,8 +21,8 @@ def get_config():

See Also
--------
config_context: Context manager for global scikit-learn configuration
set_config: Set global scikit-learn configuration
config_context : Context manager for global scikit-learn configuration.
set_config : Set global scikit-learn configuration.
"""
return _global_config.copy()

@@ -69,8 +69,8 @@ def set_config(assume_finite=None, working_memory=None,

See Also
--------
config_context: Context manager for global scikit-learn configuration
get_config: Retrieve current values of the global configuration
config_context : Context manager for global scikit-learn configuration.
get_config : Retrieve current values of the global configuration.
"""
if assume_finite is not None:
_global_config['assume_finite'] = assume_finite
@@ -138,8 +138,8 @@ def config_context(**new_config):

See Also
--------
set_config: Set global scikit-learn configuration
get_config: Retrieve current values of the global configuration
set_config : Set global scikit-learn configuration.
get_config : Retrieve current values of the global configuration.
"""
old_config = get_config().copy()
set_config(**new_config)
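The `_config.py` hunks above all make the same mechanical change: a lowercase "See also" header becomes "See Also". A hypothetical helper (not part of scikit-learn's tooling) sketching how such offending headers could be located in a docstring:

```python
import re

# Match a line consisting solely of the lowercase header "See also".
# MULTILINE makes ^ and $ anchor at line boundaries within the docstring.
SECTION_RE = re.compile(r"^See also\s*$", re.MULTILINE)

def find_lowercase_see_also(text):
    """Return character offsets of 'See also' headers that should be
    spelled 'See Also' under the numpydoc convention."""
    return [m.start() for m in SECTION_RE.finditer(text)]

docstring = """\
Returns the global config.

See also
--------
set_config : Set global scikit-learn configuration.
"""
offsets = find_lowercase_see_also(docstring)  # one offending header found
```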
2 changes: 1 addition & 1 deletion sklearn/calibration.py
@@ -374,7 +374,7 @@ class _CalibratedClassifier:
if None, then classes is extracted from the given target values
in fit().

See also
See Also
--------
CalibratedClassifierCV

4 changes: 2 additions & 2 deletions sklearn/cluster/_agglomerative.py
@@ -416,9 +416,9 @@ def linkage_tree(X, connectivity=None, n_clusters=None, linkage='complete',
distances[i] refers to the distance between children[i][0] and
children[i][1] when they are merged.

See also
See Also
--------
ward_tree : hierarchical clustering with ward linkage
ward_tree : Hierarchical clustering with ward linkage.
"""
X = np.asarray(X)
if X.ndim == 1:
4 changes: 1 addition & 3 deletions sklearn/cluster/_birch.py
@@ -394,9 +394,7 @@ class Birch(ClusterMixin, TransformerMixin, BaseEstimator):

See Also
--------

MiniBatchKMeans
Alternative implementation that does incremental updates
MiniBatchKMeans : Alternative implementation that does incremental updates
of the centers' positions using mini-batches.

Notes
15 changes: 6 additions & 9 deletions sklearn/cluster/_dbscan.py
@@ -97,13 +97,11 @@ def dbscan(X, eps=0.5, *, min_samples=5, metric='minkowski',
labels : ndarray of shape (n_samples,)
Cluster labels for each point. Noisy samples are given the label -1.

See also
See Also
--------
DBSCAN
An estimator interface for this clustering algorithm.
OPTICS
A similar estimator interface clustering at multiple values of eps. Our
implementation is optimized for memory usage.
DBSCAN : An estimator interface for this clustering algorithm.
OPTICS : A similar estimator interface clustering at multiple values of
eps. Our implementation is optimized for memory usage.

Notes
-----
@@ -232,10 +230,9 @@ class DBSCAN(ClusterMixin, BaseEstimator):
>>> clustering
DBSCAN(eps=3, min_samples=2)

See also
See Also
--------
OPTICS
A similar clustering at multiple values of eps. Our implementation
OPTICS : A similar clustering at multiple values of eps. Our implementation
is optimized for memory usage.

Notes
11 changes: 4 additions & 7 deletions sklearn/cluster/_kmeans.py
@@ -714,12 +714,10 @@ class KMeans(TransformerMixin, ClusterMixin, BaseEstimator):
n_iter_ : int
Number of iterations run.

See also
See Also
--------

MiniBatchKMeans
Alternative online implementation that does incremental updates
of the centers positions using mini-batches.
MiniBatchKMeans : Alternative online implementation that does incremental
updates of the centers positions using mini-batches.
For large scale learning (say n_samples > 10k) MiniBatchKMeans is
probably much faster than the default batch implementation.

@@ -1497,8 +1495,8 @@ class MiniBatchKMeans(KMeans):

See Also
--------
KMeans
The classic implementation of the clustering method based on the
KMeans : The classic implementation of the clustering method based on the
Lloyd's algorithm. It consumes the whole set of input data at each
iteration.

3 changes: 1 addition & 2 deletions sklearn/cluster/_optics.py
@@ -179,8 +179,7 @@ class OPTICS(ClusterMixin, BaseEstimator):

See Also
--------
DBSCAN
A similar clustering for a specified neighborhood radius (eps).
DBSCAN : A similar clustering for a specified neighborhood radius (eps).
Our implementation is optimized for runtime.

References
14 changes: 7 additions & 7 deletions sklearn/compose/_column_transformer.py
@@ -145,12 +145,12 @@ class ColumnTransformer(TransformerMixin, _BaseComposition):
in the `passthrough` keyword. Those columns specified with `passthrough`
are added at the right to the output of the transformers.

See also
See Also
--------
sklearn.compose.make_column_transformer : convenience function for
make_column_transformer : Convenience function for
combining the outputs of multiple transformer objects applied to
column subsets of the original feature space.
sklearn.compose.make_column_selector : convenience function for selecting
make_column_selector : Convenience function for selecting
columns based on datatype or the columns name with a regex pattern.

Examples
@@ -772,9 +772,9 @@ def make_column_transformer(*transformers,
-------
ct : ColumnTransformer

See also
See Also
--------
sklearn.compose.ColumnTransformer : Class that allows combining the
ColumnTransformer : Class that allows combining the
outputs of multiple transformer objects used on column subsets
of the data into a single feature space.

@@ -838,9 +838,9 @@ class make_column_selector:
Callable for column selection to be used by a
:class:`ColumnTransformer`.

See also
See Also
--------
sklearn.compose.ColumnTransformer : Class that allows combining the
ColumnTransformer : Class that allows combining the
outputs of multiple transformer objects used on column subsets
of the data into a single feature space.

6 changes: 3 additions & 3 deletions sklearn/cross_decomposition/_pls.py
@@ -632,7 +632,7 @@ class PLSCanonical(_PLS):
PLSCanonical()
>>> X_c, Y_c = plsca.transform(X, Y)

See also
See Also
--------
CCA
PLSSVD
@@ -742,7 +742,7 @@ class CCA(_UnstableArchMixin, _PLS):
CCA(n_components=1)
>>> X_c, Y_c = cca.transform(X, Y)

See also
See Also
--------
PLSCanonical
PLSSVD
@@ -824,7 +824,7 @@ class PLSSVD(TransformerMixin, BaseEstimator):
>>> X_c.shape, Y_c.shape
((4, 2), (4, 2))

See also
See Also
--------
PLSCanonical
CCA
22 changes: 11 additions & 11 deletions sklearn/datasets/_samples_generator.py
@@ -150,10 +150,10 @@ def make_classification(n_samples=100, n_features=20, *, n_informative=2,
.. [1] I. Guyon, "Design of experiments for the NIPS 2003 variable
selection benchmark", 2003.

See also
See Also
--------
make_blobs: simplified variant
make_multilabel_classification: unrelated generator for multilabel tasks
make_blobs : Simplified variant.
make_multilabel_classification : Unrelated generator for multilabel tasks.
"""
generator = check_random_state(random_state)

@@ -461,9 +461,9 @@ def make_hastie_10_2(n_samples=12000, *, random_state=None):
.. [1] T. Hastie, R. Tibshirani and J. Friedman, "Elements of Statistical
Learning Ed. 2", Springer, 2009.

See also
See Also
--------
make_gaussian_quantiles: a generalization of this dataset approach
make_gaussian_quantiles : A generalization of this dataset approach.
"""
rs = check_random_state(random_state)

@@ -816,9 +816,9 @@ def make_blobs(n_samples=100, n_features=2, *, centers=None, cluster_std=1.0,
>>> y
array([0, 1, 2, 0, 2, 2, 2, 1, 1, 0])

See also
See Also
--------
make_classification: a more intricate variant
make_classification : A more intricate variant.
"""
generator = check_random_state(random_state)

@@ -1301,7 +1301,7 @@ def make_spd_matrix(n_dim, *, random_state=None):
X : ndarray of shape (n_dim, n_dim)
The random symmetric, positive-definite matrix.

See also
See Also
--------
make_sparse_spd_matrix
"""
@@ -1357,7 +1357,7 @@ def make_sparse_spd_matrix(dim=1, *, alpha=0.95, norm_diag=False,
Thus alpha does not translate directly into the filling fraction of
the matrix itself.

See also
See Also
--------
make_spd_matrix
"""
@@ -1633,7 +1633,7 @@ def make_biclusters(shape, n_clusters, *, noise=0.0, minval=10,
of the seventh ACM SIGKDD international conference on Knowledge
discovery and data mining (pp. 269-274). ACM.

See also
See Also
--------
make_checkerboard
"""
@@ -1725,7 +1725,7 @@ def make_checkerboard(shape, n_clusters, *, noise=0.0, minval=10,
Spectral biclustering of microarray data: coclustering genes
and conditions. Genome research, 13(4), 703-716.

See also
See Also
--------
make_biclusters
"""
9 changes: 4 additions & 5 deletions sklearn/datasets/_svmlight_format_io.py
@@ -132,11 +132,10 @@ def load_svmlight_file(f, *, n_features=None, dtype=np.float64,
query_id for each sample. Only returned when query_id is set to
True.

See also
See Also
--------
load_svmlight_files: similar function for loading multiple files in this
format, enforcing the same number of features/columns
on all of them.
load_svmlight_files : Similar function for loading multiple files in this
format, enforcing the same number of features/columns on all of them.

Examples
--------
@@ -287,7 +286,7 @@ def load_svmlight_files(files, *, n_features=None, dtype=np.float64,
number of features (X_train.shape[1] == X_test.shape[1]). This may not
be the case if you load the files individually with load_svmlight_file.

See also
See Also
--------
load_svmlight_file
"""
4 changes: 2 additions & 2 deletions sklearn/decomposition/_dict_learning.py
@@ -1285,7 +1285,7 @@ class DictionaryLearning(_BaseSparseCoding, BaseEstimator):
J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009: Online dictionary learning
for sparse coding (https://www.di.ens.fr/sierra/pdfs/icml09.pdf)

See also
See Also
--------
SparseCoder
MiniBatchDictionaryLearning
@@ -1525,7 +1525,7 @@ class MiniBatchDictionaryLearning(_BaseSparseCoding, BaseEstimator):
J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009: Online dictionary learning
for sparse coding (https://www.di.ens.fr/sierra/pdfs/icml09.pdf)

See also
See Also
--------
SparseCoder
DictionaryLearning
2 changes: 1 addition & 1 deletion sklearn/decomposition/_factor_analysis.py
@@ -138,7 +138,7 @@ class FactorAnalysis(TransformerMixin, BaseEstimator):
.. Christopher M. Bishop: Pattern Recognition and Machine Learning,
Chapter 12.2.4

See also
See Also
--------
PCA: Principal component analysis is also a latent linear variable model
which however assumes equal noise variance for each feature.
2 changes: 1 addition & 1 deletion sklearn/decomposition/_incremental_pca.py
@@ -157,7 +157,7 @@ class IncrementalPCA(_BasePCA):
G. Golub and C. Van Loan. Matrix Computations, Third Edition, Chapter 5,
Section 5.4.4, pp. 252-253.

See also
See Also
--------
PCA
KernelPCA
4 changes: 2 additions & 2 deletions sklearn/decomposition/_sparse_pca.py
@@ -103,7 +103,7 @@ class SparsePCA(TransformerMixin, BaseEstimator):
>>> np.mean(transformer.components_ == 0)
0.9666...

See also
See Also
--------
PCA
MiniBatchSparsePCA
@@ -296,7 +296,7 @@ class MiniBatchSparsePCA(SparsePCA):
>>> np.mean(transformer.components_ == 0)
0.94

See also
See Also
--------
PCA
SparsePCA