DOC Replace the phrase "where n_samples in the number" with "where n_samples is the number" #20914

Merged: 5 commits, Sep 2, 2021
4 changes: 2 additions & 2 deletions sklearn/compose/_target.py
@@ -187,8 +187,8 @@ def fit(self, X, y, **fit_params):
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
- Training vector, where n_samples is the number of samples and
- n_features is the number of features.
+ Training vector, where `n_samples` is the number of samples and
+ `n_features` is the number of features.

y : array-like of shape (n_samples,)
Target values.
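The `(n_samples, n_features)` / `(n_samples,)` shape convention this docstring describes can be illustrated with a minimal sketch; the data and the choice of `LinearRegression` with a log/exp target transform are made up for illustration, not part of the PR:

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.rand(100, 3)                       # training vector: (n_samples, n_features)
y = np.exp(X @ np.array([1.0, 2.0, 0.5]))  # strictly positive targets: (n_samples,)

# Regress on log(y), then map predictions back through exp.
reg = TransformedTargetRegressor(
    regressor=LinearRegression(), func=np.log, inverse_func=np.exp
)
reg.fit(X, y)
pred = reg.predict(X)                      # predictions on the original scale: (n_samples,)
```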
10 changes: 5 additions & 5 deletions sklearn/covariance/_empirical_covariance.py
@@ -213,8 +213,8 @@ def fit(self, X, y=None):
Parameters
----------
X : array-like of shape (n_samples, n_features)
- Training data, where n_samples is the number of samples and
- n_features is the number of features.
+ Training data, where `n_samples` is the number of samples and
+ `n_features` is the number of features.

y : Ignored
Not used, present for API consistency by convention.
@@ -240,9 +240,9 @@ def score(self, X_test, y=None):
Parameters
----------
X_test : array-like of shape (n_samples, n_features)
- Test data of which we compute the likelihood, where n_samples is
- the number of samples and n_features is the number of features.
- X_test is assumed to be drawn from the same distribution than
+ Test data of which we compute the likelihood, where `n_samples` is
+ the number of samples and `n_features` is the number of features.
+ `X_test` is assumed to be drawn from the same distribution than
the data used in fit (including centering).

y : Ignored
4 changes: 2 additions & 2 deletions sklearn/covariance/_shrunk_covariance.py
@@ -157,8 +157,8 @@ def fit(self, X, y=None):
Parameters
----------
X : array-like of shape (n_samples, n_features)
- Training data, where n_samples is the number of samples
- and n_features is the number of features.
+ Training data, where `n_samples` is the number of samples
+ and `n_features` is the number of features.

y : Ignored
Not used, present for API consistency by convention.
8 changes: 4 additions & 4 deletions sklearn/cross_decomposition/_pls.py
@@ -462,12 +462,12 @@ def fit_transform(self, X, y=None):
Parameters
----------
X : array-like of shape (n_samples, n_features)
- Training vectors, where n_samples is the number of samples and
- n_features is the number of predictors.
+ Training vectors, where `n_samples` is the number of samples and
+ `n_features` is the number of predictors.

y : array-like of shape (n_samples, n_targets), default=None
- Target vectors, where n_samples is the number of samples and
- n_targets is the number of response variables.
+ Target vectors, where `n_samples` is the number of samples and
+ `n_targets` is the number of response variables.

Returns
-------
4 changes: 2 additions & 2 deletions sklearn/datasets/_svmlight_format_io.py
@@ -438,8 +438,8 @@ def dump_svmlight_file(
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
- Training vectors, where n_samples is the number of samples and
- n_features is the number of features.
+ Training vectors, where `n_samples` is the number of samples and
+ `n_features` is the number of features.

y : {array-like, sparse matrix}, shape = [n_samples (, n_labels)]
Target values. Class labels must be an
20 changes: 10 additions & 10 deletions sklearn/decomposition/_base.py
@@ -85,8 +85,8 @@ def fit(self, X, y=None):
Parameters
----------
X : array-like of shape (n_samples, n_features)
- Training data, where n_samples is the number of samples and
- n_features is the number of features.
+ Training data, where `n_samples` is the number of samples and
+ `n_features` is the number of features.

Returns
-------
@@ -103,14 +103,14 @@ def transform(self, X):
Parameters
----------
X : array-like of shape (n_samples, n_features)
- New data, where n_samples is the number of samples
- and n_features is the number of features.
+ New data, where `n_samples` is the number of samples
+ and `n_features` is the number of features.

Returns
-------
X_new : array-like of shape (n_samples, n_components)
- Projection of X in the first principal components, where n_samples
- is the number of samples and n_components is the number of the components.
+ Projection of X in the first principal components, where `n_samples`
+ is the number of samples and `n_components` is the number of the components.
"""
check_is_fitted(self)

@@ -130,14 +130,14 @@ def inverse_transform(self, X):
Parameters
----------
X : array-like of shape (n_samples, n_components)
- New data, where n_samples is the number of samples
- and n_components is the number of components.
+ New data, where `n_samples` is the number of samples
+ and `n_components` is the number of components.

Returns
-------
X_original array-like of shape (n_samples, n_features)
- Original data, where n_samples is the number of samples
- and n_features is the number of features.
+ Original data, where `n_samples` is the number of samples
+ and `n_features` is the number of features.

Notes
-----
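The shape relationships these `_BasePCA` docstrings spell out (`transform` maps `(n_samples, n_features)` to `(n_samples, n_components)` and `inverse_transform` maps back) can be checked with a quick sketch; the random data and component count are arbitrary:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.rand(50, 10)                   # (n_samples, n_features)

pca = PCA(n_components=4).fit(X)
X_new = pca.transform(X)               # (n_samples, n_components)
X_back = pca.inverse_transform(X_new)  # (n_samples, n_features); lossy when n_components < n_features
```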
2 changes: 1 addition & 1 deletion sklearn/decomposition/_dict_learning.py
@@ -1546,7 +1546,7 @@ def fit(self, X, y=None):
Parameters
----------
X : array-like of shape (n_samples, n_features)
- Training vector, where `n_samples` in the number of samples
+ Training vector, where `n_samples` is the number of samples
and `n_features` is the number of features.

y : Ignored
24 changes: 12 additions & 12 deletions sklearn/decomposition/_fastica.py
@@ -172,8 +172,8 @@ def fastica(
Parameters
----------
X : array-like of shape (n_samples, n_features)
- Training vector, where n_samples is the number of samples and
- n_features is the number of features.
+ Training vector, where `n_samples` is the number of samples and
+ `n_features` is the number of features.

n_components : int, default=None
Number of components to extract. If None no dimension reduction
@@ -464,8 +464,8 @@ def _fit(self, X, compute_sources=False):
Parameters
----------
X : array-like of shape (n_samples, n_features)
- Training data, where n_samples is the number of samples
- and n_features is the number of features.
+ Training data, where `n_samples` is the number of samples
+ and `n_features` is the number of features.

compute_sources : bool, default=False
If False, sources are not computes but only the rotation matrix.
@@ -600,8 +600,8 @@ def fit_transform(self, X, y=None):
Parameters
----------
X : array-like of shape (n_samples, n_features)
- Training data, where n_samples is the number of samples
- and n_features is the number of features.
+ Training data, where `n_samples` is the number of samples
+ and `n_features` is the number of features.

y : Ignored
Not used, present for API consistency by convention.
@@ -620,8 +620,8 @@ def fit(self, X, y=None):
Parameters
----------
X : array-like of shape (n_samples, n_features)
- Training data, where n_samples is the number of samples
- and n_features is the number of features.
+ Training data, where `n_samples` is the number of samples
+ and `n_features` is the number of features.

y : Ignored
Not used, present for API consistency by convention.
@@ -640,8 +640,8 @@ def transform(self, X, copy=True):
Parameters
----------
X : array-like of shape (n_samples, n_features)
- Data to transform, where n_samples is the number of samples
- and n_features is the number of features.
+ Data to transform, where `n_samples` is the number of samples
+ and `n_features` is the number of features.

copy : bool, default=True
If False, data passed to fit can be overwritten. Defaults to True.
@@ -668,8 +668,8 @@ def inverse_transform(self, X, copy=True):
Parameters
----------
X : array-like of shape (n_samples, n_components)
- Sources, where n_samples is the number of samples
- and n_components is the number of components.
+ Sources, where `n_samples` is the number of samples
+ and `n_components` is the number of components.
copy : bool, default=True
If False, data passed to fit are overwritten. Defaults to True.

12 changes: 6 additions & 6 deletions sklearn/decomposition/_incremental_pca.py
@@ -190,8 +190,8 @@ def fit(self, X, y=None):
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
- Training data, where n_samples is the number of samples and
- n_features is the number of features.
+ Training data, where `n_samples` is the number of samples and
+ `n_features` is the number of features.

y : Ignored
Not used, present for API consistency by convention.
@@ -239,8 +239,8 @@ def partial_fit(self, X, y=None, check_input=True):
Parameters
----------
X : array-like of shape (n_samples, n_features)
- Training data, where n_samples is the number of samples and
- n_features is the number of features.
+ Training data, where `n_samples` is the number of samples and
+ `n_features` is the number of features.

y : Ignored
Not used, present for API consistency by convention.
@@ -360,8 +360,8 @@ def transform(self, X):
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
- New data, where n_samples is the number of samples
- and n_features is the number of features.
+ New data, where `n_samples` is the number of samples
+ and `n_features` is the number of features.

Returns
-------
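The `partial_fit` convention documented above can be sketched as follows: each batch keeps the `(n_batch, n_features)` layout and `transform` returns `(n_samples, n_components)`. The batch count and sizes here are arbitrary choices for illustration:

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.RandomState(0)
X = rng.rand(100, 10)               # (n_samples, n_features)

ipca = IncrementalPCA(n_components=3)
for batch in np.array_split(X, 5):  # each batch: (20, n_features); needs >= n_components rows
    ipca.partial_fit(batch)

X_new = ipca.transform(X)           # (n_samples, n_components)
```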
8 changes: 4 additions & 4 deletions sklearn/decomposition/_pca.py
@@ -368,8 +368,8 @@ def fit(self, X, y=None):
Parameters
----------
X : array-like of shape (n_samples, n_features)
- Training data, where n_samples is the number of samples
- and n_features is the number of features.
+ Training data, where `n_samples` is the number of samples
+ and `n_features` is the number of features.

y : Ignored
Ignored.
@@ -388,8 +388,8 @@ def fit_transform(self, X, y=None):
Parameters
----------
X : array-like of shape (n_samples, n_features)
- Training data, where n_samples is the number of samples
- and n_features is the number of features.
+ Training data, where `n_samples` is the number of samples
+ and `n_features` is the number of features.

y : Ignored
Ignored.
4 changes: 2 additions & 2 deletions sklearn/decomposition/_sparse_pca.py
@@ -157,8 +157,8 @@ def fit(self, X, y=None):
Parameters
----------
X : array-like of shape (n_samples, n_features)
- Training vector, where n_samples is the number of samples
- and n_features is the number of features.
+ Training vector, where `n_samples` is the number of samples
+ and `n_features` is the number of features.

y : Ignored
Not used, present here for API consistency by convention.
4 changes: 2 additions & 2 deletions sklearn/discriminant_analysis.py
@@ -828,8 +828,8 @@ def fit(self, X, y):
Parameters
----------
X : array-like of shape (n_samples, n_features)
- Training vector, where n_samples is the number of samples and
- n_features is the number of features.
+ Training vector, where `n_samples` is the number of samples and
+ `n_features` is the number of features.

y : array-like of shape (n_samples,)
Target values (integers)
20 changes: 10 additions & 10 deletions sklearn/ensemble/_stacking.py
@@ -248,8 +248,8 @@ def predict(self, X, **predict_params):
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
- Training vectors, where n_samples is the number of samples and
- n_features is the number of features.
+ Training vectors, where `n_samples` is the number of samples and
+ `n_features` is the number of features.

**predict_params : dict of str -> obj
Parameters to the `predict` called by the `final_estimator`. Note
@@ -490,8 +490,8 @@ def predict(self, X, **predict_params):
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
- Training vectors, where n_samples is the number of samples and
- n_features is the number of features.
+ Training vectors, where `n_samples` is the number of samples and
+ `n_features` is the number of features.

**predict_params : dict of str -> obj
Parameters to the `predict` called by the `final_estimator`. Note
@@ -515,8 +515,8 @@ def predict_proba(self, X):
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
- Training vectors, where n_samples is the number of samples and
- n_features is the number of features.
+ Training vectors, where `n_samples` is the number of samples and
+ `n_features` is the number of features.

Returns
-------
@@ -535,8 +535,8 @@ def decision_function(self, X):
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
- Training vectors, where n_samples is the number of samples and
- n_features is the number of features.
+ Training vectors, where `n_samples` is the number of samples and
+ `n_features` is the number of features.

Returns
-------
@@ -734,8 +734,8 @@ def fit(self, X, y, sample_weight=None):
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
- Training vectors, where n_samples is the number of samples and
- n_features is the number of features.
+ Training vectors, where `n_samples` is the number of samples and
+ `n_features` is the number of features.

y : array-like of shape (n_samples,)
Target values.
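The `(n_samples, n_features)` inputs and per-class outputs described in these stacking docstrings can be sketched as follows; the base and final estimators here are arbitrary choices for illustration, not anything mandated by the PR:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary-classification data: X is (n_samples, n_features), y is (n_samples,).
X, y = make_classification(n_samples=100, n_features=5, random_state=0)

clf = StackingClassifier(
    estimators=[("lr", LogisticRegression()), ("dt", DecisionTreeClassifier())]
)
clf.fit(X, y)
proba = clf.predict_proba(X)  # (n_samples, n_classes)
```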
12 changes: 6 additions & 6 deletions sklearn/ensemble/_voting.py
@@ -288,8 +288,8 @@ def fit(self, X, y, sample_weight=None):
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
- Training vectors, where n_samples is the number of samples and
- n_features is the number of features.
+ Training vectors, where `n_samples` is the number of samples and
+ `n_features` is the number of features.

y : array-like of shape (n_samples,)
Target values.
@@ -390,8 +390,8 @@ def transform(self, X):
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
- Training vectors, where n_samples is the number of samples and
- n_features is the number of features.
+ Training vectors, where `n_samples` is the number of samples and
+ `n_features` is the number of features.

Returns
-------
@@ -510,8 +510,8 @@ def fit(self, X, y, sample_weight=None):
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
- Training vectors, where n_samples is the number of samples and
- n_features is the number of features.
+ Training vectors, where `n_samples` is the number of samples and
+ `n_features` is the number of features.

y : array-like of shape (n_samples,)
Target values.