DOC add missing attributes in several modules #15521

Merged: 7 commits, Jun 18, 2020
44 changes: 25 additions & 19 deletions sklearn/feature_selection/_rfe.py
Original file line number Diff line number Diff line change
Expand Up @@ -52,7 +52,7 @@ class RFE(SelectorMixin, MetaEstimatorMixin, BaseEstimator):

Parameters
----------
estimator : object
estimator : estimator instance
A supervised learning estimator with a ``fit`` method that provides
information about feature importance
(e.g. `coef_`, `feature_importances_`).
Expand Down Expand Up @@ -89,19 +89,22 @@ class RFE(SelectorMixin, MetaEstimatorMixin, BaseEstimator):

Attributes
----------
classes_ : ndarray of shape (n_classes,)
Unique class labels.

estimator_ : estimator instance
The fitted estimator used to select features.

n_features_ : int
The number of selected features.

support_ : array of shape [n_features]
The mask of selected features.

ranking_ : array of shape [n_features]
ranking_ : ndarray of shape (n_features,)
The feature ranking, such that ``ranking_[i]`` corresponds to the
ranking position of the i-th feature. Selected (i.e., estimated
best) features are assigned rank 1.

estimator_ : object
The external estimator fit on the reduced dataset.
support_ : ndarray of shape (n_features,)
The mask of selected features.

Examples
--------
Expand Down Expand Up @@ -363,7 +366,7 @@ class RFECV(RFE):

Parameters
----------
estimator : object
estimator : estimator instance
A supervised learning estimator with a ``fit`` method that provides
information about feature importance either through a ``coef_``
attribute or through a ``feature_importances_`` attribute.
Expand Down Expand Up @@ -439,26 +442,29 @@ class RFECV(RFE):

Attributes
----------
classes_ : ndarray of shape (n_classes,)
Unique class labels.

estimator_ : estimator instance
The fitted estimator used to select features.

grid_scores_ : ndarray of shape (n_subsets_of_features,)
The cross-validation scores such that
``grid_scores_[i]`` corresponds to
the CV score of the i-th subset of features.

n_features_ : int
The number of selected features with cross-validation.

support_ : array of shape [n_features]
The mask of selected features.

ranking_ : array of shape [n_features]
ranking_ : ndarray of shape (n_features,)
The feature ranking, such that `ranking_[i]`
corresponds to the ranking
position of the i-th feature.
Selected (i.e., estimated best)
features are assigned rank 1.

grid_scores_ : array of shape [n_subsets_of_features]
The cross-validation scores such that
``grid_scores_[i]`` corresponds to
the CV score of the i-th subset of features.

estimator_ : object
The external estimator fit on the reduced dataset.
support_ : ndarray of shape (n_features,)
The mask of selected features.

Notes
-----
Expand Down
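The attributes this hunk adds to the `RFE` docstring (`classes_`, `estimator_`) can be exercised with a quick sketch; this is reviewer-side illustration, not part of the PR, and the estimator and data choices are arbitrary:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Toy data: 10 features, keep the 3 most informative ones.
X, y = make_classification(n_samples=100, n_features=10, random_state=0)
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=3).fit(X, y)

print(rfe.classes_)        # unique class labels, delegated to the fitted estimator
print(rfe.estimator_)      # the fitted estimator on the reduced feature set
print(rfe.n_features_)     # number of selected features
print(rfe.support_.shape)  # (10,) boolean mask of selected features
print(rfe.ranking_)        # selected features are assigned rank 1
```

The same attributes exist on `RFECV`, which additionally exposes `grid_scores_` per feature subset.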
16 changes: 10 additions & 6 deletions sklearn/linear_model/_perceptron.py
Expand Up @@ -97,20 +97,24 @@ class Perceptron(BaseSGDClassifier):

Attributes
----------
coef_ : ndarray of shape = [1, n_features] if n_classes == 2 else \
[n_classes, n_features]
classes_ : ndarray of shape (n_classes,)
The unique class labels.

coef_ : ndarray of shape (1, n_features) if n_classes == 2 else \
(n_classes, n_features)
Weights assigned to the features.

intercept_ : ndarray of shape = [1] if n_classes == 2 else [n_classes]
intercept_ : ndarray of shape (1,) if n_classes == 2 else (n_classes,)
Constants in decision function.

loss_function_ : concrete LossFunction
The function that determines the loss, or difference between the
output of the algorithm and the target values.

n_iter_ : int
The actual number of iterations to reach the stopping criterion.
For multiclass fits, it is the maximum over every binary fit.

classes_ : ndarray of shape (n_classes,)
The unique classes labels.

t_ : int
Number of weight updates performed during training.
Same as ``(n_iter_ * n_samples)``.
Expand Down
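The reordered `Perceptron` attributes can be checked against a fitted model; a minimal sketch (not part of this PR, using iris purely for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron

X, y = load_iris(return_X_y=True)
clf = Perceptron(random_state=0).fit(X, y)

print(clf.classes_)          # the unique class labels
print(clf.coef_.shape)       # (n_classes, n_features) since n_classes > 2
print(clf.intercept_.shape)  # (n_classes,)
print(clf.n_iter_)           # iterations until the stopping criterion
print(clf.t_)                # number of weight updates during training
```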
12 changes: 11 additions & 1 deletion sklearn/neighbors/_classification.py
Expand Up @@ -104,6 +104,9 @@ class KNeighborsClassifier(NeighborsBase, KNeighborsMixin,
`p` parameter value if the `effective_metric_` attribute is set to
'minkowski'.

n_samples_fit_ : int
Number of samples in the fitted data.

outputs_2d_ : bool
False when `y`'s shape is (n_samples, ) or (n_samples, 1) during fit
otherwise True.
Expand Down Expand Up @@ -344,6 +347,13 @@ class RadiusNeighborsClassifier(NeighborsBase, RadiusNeighborsMixin,
`p` parameter value if the `effective_metric_` attribute is set to
'minkowski'.

n_samples_fit_ : int
Number of samples in the fitted data.

outlier_label_ : int or array-like of shape (n_class,)
Label which is given for outlier samples (samples with no neighbors
on given radius).

outputs_2d_ : bool
False when `y`'s shape is (n_samples, ) or (n_samples, 1) during fit
otherwise True.
Expand Down Expand Up @@ -419,7 +429,7 @@ def fit(self, X, y):

elif self.outlier_label == 'most_frequent':
outlier_label_ = []
# iterate over multi-output, get the most frequest label for each
# iterate over multi-output, get the most frequent label for each
# output.
for k, classes_k in enumerate(classes_):
label_count = np.bincount(_y[:, k])
Expand Down
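The `n_samples_fit_` and `outlier_label_` attributes documented here can be inspected after fitting; a reviewer-side sketch with toy data (not part of this PR):

```python
from sklearn.neighbors import KNeighborsClassifier, RadiusNeighborsClassifier

X = [[0], [1], [2], [3]]
y = [0, 0, 1, 1]

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.n_samples_fit_)   # number of samples in the fitted data
print(knn.outputs_2d_)      # False, since y is 1-D

rnc = RadiusNeighborsClassifier(radius=1.0,
                                outlier_label='most_frequent').fit(X, y)
# Label assigned to samples with no neighbors within the given radius.
print(rnc.outlier_label_)
```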
32 changes: 32 additions & 0 deletions sklearn/neighbors/_graph.py
Expand Up @@ -275,6 +275,22 @@ class KNeighborsTransformer(KNeighborsMixin, UnsupervisedMixin,
The number of parallel jobs to run for neighbors search.
If ``-1``, then the number of jobs is set to the number of CPU cores.

Attributes
----------
effective_metric_ : str or callable
The distance metric used. It will be the same as the `metric` parameter
or a synonym of it, e.g. 'euclidean' if the `metric` parameter is set to
'minkowski' and the `p` parameter to 2.

effective_metric_params_ : dict
Additional keyword arguments for the metric function. For most metrics
it will be the same as the `metric_params` parameter, but it may also
contain the `p` parameter value if the `effective_metric_` attribute is
set to 'minkowski'.

n_samples_fit_ : int
Number of samples in the fitted data.

Examples
--------
>>> from sklearn.manifold import Isomap
Expand Down Expand Up @@ -417,6 +433,22 @@ class RadiusNeighborsTransformer(RadiusNeighborsMixin, UnsupervisedMixin,
The number of parallel jobs to run for neighbors search.
If ``-1``, then the number of jobs is set to the number of CPU cores.

Attributes
----------
effective_metric_ : str or callable
The distance metric used. It will be the same as the `metric` parameter
or a synonym of it, e.g. 'euclidean' if the `metric` parameter is set to
'minkowski' and the `p` parameter to 2.

effective_metric_params_ : dict
Additional keyword arguments for the metric function. For most metrics
it will be the same as the `metric_params` parameter, but it may also
contain the `p` parameter value if the `effective_metric_` attribute is
set to 'minkowski'.

n_samples_fit_ : int
Number of samples in the fitted data.

Examples
--------
>>> from sklearn.cluster import DBSCAN
Expand Down
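The new transformer Attributes sections can be verified against a fitted `KNeighborsTransformer`; an illustrative sketch, not part of this PR:

```python
from sklearn.neighbors import KNeighborsTransformer

X = [[0], [1], [2], [3]]
kt = KNeighborsTransformer(n_neighbors=2).fit(X)

print(kt.n_samples_fit_)            # number of samples in the fitted data
print(kt.effective_metric_)         # 'euclidean': minkowski with p=2
print(kt.effective_metric_params_)  # extra kwargs for the metric function

graph = kt.transform(X)             # sparse (n_queries, n_samples_fit_) graph
print(graph.shape)
```

`RadiusNeighborsTransformer` exposes the same three attributes.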
6 changes: 6 additions & 0 deletions sklearn/neighbors/_regression.py
Expand Up @@ -107,6 +107,9 @@ class KNeighborsRegressor(NeighborsBase, KNeighborsMixin,
`p` parameter value if the `effective_metric_` attribute is set to
'minkowski'.

n_samples_fit_ : int
Number of samples in the fitted data.

Examples
--------
>>> X = [[0], [1], [2], [3]]
Expand Down Expand Up @@ -283,6 +286,9 @@ class RadiusNeighborsRegressor(NeighborsBase, RadiusNeighborsMixin,
`p` parameter value if the `effective_metric_` attribute is set to
'minkowski'.

n_samples_fit_ : int
Number of samples in the fitted data.

Examples
--------
>>> X = [[0], [1], [2], [3]]
Expand Down
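A quick check of `n_samples_fit_` on the regressors as well (reviewer-side sketch, not part of this PR):

```python
from sklearn.neighbors import KNeighborsRegressor

X = [[0], [1], [2], [3]]
y = [0.0, 0.0, 1.0, 1.0]
reg = KNeighborsRegressor(n_neighbors=2).fit(X, y)

print(reg.n_samples_fit_)    # number of samples in the fitted data
print(reg.predict([[1.5]]))  # mean of the two nearest targets
```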
33 changes: 19 additions & 14 deletions sklearn/neighbors/_unsupervised.py
Expand Up @@ -73,22 +73,27 @@ class NearestNeighbors(KNeighborsMixin, RadiusNeighborsMixin,
effective_metric_params_ : dict
Parameters for the metric used to compute distances to neighbors.

n_samples_fit_ : int
Number of samples in the fitted data.

Examples
--------
>>> import numpy as np
>>> from sklearn.neighbors import NearestNeighbors
>>> samples = [[0, 0, 2], [1, 0, 0], [0, 0, 1]]

>>> neigh = NearestNeighbors(n_neighbors=2, radius=0.4)
>>> neigh.fit(samples)
NearestNeighbors(...)

>>> neigh.kneighbors([[0, 0, 1.3]], 2, return_distance=False)
array([[2, 0]]...)

>>> nbrs = neigh.radius_neighbors([[0, 0, 1.3]], 0.4, return_distance=False)
>>> np.asarray(nbrs[0][0])
array(2)
>>> import numpy as np
>>> from sklearn.neighbors import NearestNeighbors
>>> samples = [[0, 0, 2], [1, 0, 0], [0, 0, 1]]

>>> neigh = NearestNeighbors(n_neighbors=2, radius=0.4)
>>> neigh.fit(samples)
NearestNeighbors(...)

>>> neigh.kneighbors([[0, 0, 1.3]], 2, return_distance=False)
array([[2, 0]]...)

>>> nbrs = neigh.radius_neighbors(
... [[0, 0, 1.3]], 0.4, return_distance=False
... )
>>> np.asarray(nbrs[0][0])
array(2)

See also
--------
Expand Down