DOC Docstring fix for DecisionTreeClassifier and DecisionTreeR… #15483


Merged 24 commits on Nov 5, 2019
24 commits
03e0b51
change docstring start to infinitive; change input type to bool
r-build Nov 2, 2019
fc07db2
Merge remote-tracking branch 'upstream/master' into docstring-fix-DTC
r-build Nov 2, 2019
326f62b
Fix .decision_path docstring. Remove blank line before final """; add…
r-build Nov 2, 2019
ec4d6a7
Fix DTC.fit. Change paramter type to bool; add description to return …
r-build Nov 2, 2019
82043fe
Merge remote-tracking branch 'upstream/master' into docstring-fix-DTC
r-build Nov 2, 2019
483e0f1
Fix docstring for DTC.get_depth: Start summary with infinitive; add R…
r-build Nov 2, 2019
6b59695
Fix docstring for DTC.get_n_leaves: Start docstring with infinitve; a…
r-build Nov 2, 2019
40057d0
Merge remote-tracking branch 'upstream/master' into docstring-fix-DTC
r-build Nov 2, 2019
bfd63db
Fxi docstring DTC .predict: change parameter type to bool
r-build Nov 2, 2019
9d34c45
Fix docstring for DTC .predict_log_proba: Edit first Returns line to …
r-build Nov 2, 2019
6cc3744
Fix docstring for DTC .predict_proba: Correct Returns name and line l…
r-build Nov 2, 2019
7c30290
Fxi docstring for DTC: correct capitalisation to `See Also`
r-build Nov 2, 2019
102eb71
fix docstring for DTC: Rremove space after reST `deprecated`
r-build Nov 2, 2019
3d7a11d
fix docstring for DTC: Change parameter type to `str`
r-build Nov 2, 2019
12d486b
fix docstring for DTC: Change parameter type to `str`
r-build Nov 2, 2019
12f7fb4
Fix docstring for DTC: Added some missing `See Also` references (not …
r-build Nov 2, 2019
84f7f54
Fix docstring for DTC: Change parameter type to `str`
r-build Nov 2, 2019
45eb157
Fix docstrings for DTC: Corrected section order (See Also <-> Notes)
r-build Nov 2, 2019
3b251c6
Fix docstring for DTR: remove space between reST `deprecated` and `::`
r-build Nov 2, 2019
1689678
Fix docstring for DTR .fit: Set parameter type to `bool`; Add Returns…
r-build Nov 2, 2019
bb1cd94
Fix docstring for DTR .score: Start docstring summaries with `Return`…
r-build Nov 2, 2019
b06bfbc
Merge remote-tracking branch 'upstream/master' into docstring-fix-DTC
r-build Nov 2, 2019
38caf6f
Merge remote-tracking branch 'upstream/master' into docstring-fix-DTC
r-build Nov 4, 2019
7ee8125
Fix docstrings for DTC: restore text; remove None return type for get…
r-build Nov 4, 2019
12 changes: 6 additions & 6 deletions sklearn/base.py
@@ -369,7 +369,7 @@ class RegressorMixin:
_estimator_type = "regressor"

def score(self, X, y, sample_weight=None):
"""Returns the coefficient of determination R^2 of the prediction.
"""Return the coefficient of determination R^2 of the prediction.

The coefficient R^2 is defined as (1 - u/v), where u is the residual
sum of squares ((y_true - y_pred) ** 2).sum() and v is the total
@@ -507,7 +507,7 @@ def get_shape(self, i):
return tuple(len(i) for i in indices)

def get_submatrix(self, i, data):
"""Returns the submatrix corresponding to bicluster `i`.
"""Return the submatrix corresponding to bicluster `i`.

Parameters
----------
@@ -573,7 +573,7 @@ class DensityMixin:
_estimator_type = "DensityEstimator"

def score(self, X, y=None):
"""Returns the score of the model on the data X
"""Return the score of the model on the data X

Parameters
----------
@@ -631,7 +631,7 @@ def _more_tags(self):


def is_classifier(estimator):
"""Returns True if the given estimator is (probably) a classifier.
"""Return True if the given estimator is (probably) a classifier.

Parameters
----------
@@ -647,7 +647,7 @@ def is_classifier(estimator):


def is_regressor(estimator):
"""Returns True if the given estimator is (probably) a regressor.
"""Return True if the given estimator is (probably) a regressor.

Parameters
----------
@@ -663,7 +663,7 @@ def is_regressor(estimator):


def is_outlier_detector(estimator):
"""Returns True if the given estimator is (probably) an outlier detector.
"""Return True if the given estimator is (probably) an outlier detector.

Parameters
----------
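The `sklearn/base.py` hunks above change the `is_classifier` / `is_regressor` summaries from "Returns …" to "Return …". A minimal illustrative sketch of what those helpers report for the two tree estimators this PR touches (the data-free construction here is mine, not part of the PR):

```python
# Illustrative check of the helpers whose docstrings are edited above.
from sklearn.base import is_classifier, is_regressor
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor


def estimator_kinds():
    """Return the (is_classifier, is_regressor) flags for the two tree estimators."""
    clf = DecisionTreeClassifier()
    reg = DecisionTreeRegressor()
    return is_classifier(clf), is_regressor(reg)
```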
66 changes: 39 additions & 27 deletions sklearn/tree/_classes.py
@@ -112,16 +112,26 @@ def __init__(self,
self.ccp_alpha = ccp_alpha

def get_depth(self):
"""Returns the depth of the decision tree.
"""Return the depth of the decision tree.

The depth of a tree is the maximum distance between the root
and any leaf.

Returns
-------
self.tree_.max_depth : int
The maximum depth of the tree.
"""
check_is_fitted(self)
return self.tree_.max_depth

def get_n_leaves(self):
"""Returns the number of leaves of the decision tree.
"""Return the number of leaves of the decision tree.

Returns
-------
self.tree_.n_leaves : int
Number of leaves.
"""
check_is_fitted(self)
return self.tree_.n_leaves
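The hunk above adds `Returns` sections to `get_depth` and `get_n_leaves`. A small sketch exercising both methods (the toy dataset is mine, chosen so one split separates the classes; it is not from the PR):

```python
from sklearn.tree import DecisionTreeClassifier


def tree_shape():
    """Fit a tiny tree and report (depth, number of leaves)."""
    X = [[0], [1], [2], [3]]
    y = [0, 0, 1, 1]  # perfectly separable with a single split
    clf = DecisionTreeClassifier(random_state=0).fit(X, y)
    return clf.get_depth(), clf.get_n_leaves()
```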
@@ -406,7 +416,7 @@ def predict(self, X, check_input=True):
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csr_matrix``.

check_input : boolean, (default=True)
check_input : bool, (default=True)
Allow to bypass several input checking.
Don't use this parameter unless you know what you do.

@@ -446,7 +456,7 @@ def predict(self, X, check_input=True):

def apply(self, X, check_input=True):
"""
Returns the index of the leaf that each sample is predicted as.
Return the index of the leaf that each sample is predicted as.

.. versionadded:: 0.17

@@ -457,7 +467,7 @@ def apply(self, X, check_input=True):
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csr_matrix``.

check_input : boolean, (default=True)
check_input : bool, (default=True)
Allow to bypass several input checking.
Don't use this parameter unless you know what you do.

@@ -474,7 +484,7 @@ def apply(self, X, check_input=True):
return self.tree_.apply(X)

def decision_path(self, X, check_input=True):
"""Return the decision path in the tree
"""Return the decision path in the tree.

.. versionadded:: 0.18

@@ -485,7 +495,7 @@ def decision_path(self, X, check_input=True):
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csr_matrix``.

check_input : boolean, (default=True)
check_input : bool, (default=True)
Allow to bypass several input checking.
Don't use this parameter unless you know what you do.

@@ -494,7 +504,6 @@ def decision_path(self, X, check_input=True):
indicator : sparse csr array, shape = [n_samples, n_nodes]
Return a node indicator matrix where non zero elements
indicates that the samples goes through the nodes.

"""
X = self._validate_X_predict(X, check_input)
return self.tree_.decision_path(X)
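The `decision_path` docstring above documents the sparse CSR node-indicator return. A hedged sketch of its shape on toy data (a one-split tree has a root plus two leaves, so three nodes; the dataset is mine, not from the PR):

```python
from sklearn.tree import DecisionTreeClassifier


def path_indicator_shape():
    """Return the shape of the CSR node-indicator matrix from decision_path."""
    X = [[0], [1], [2], [3]]
    y = [0, 0, 1, 1]
    clf = DecisionTreeClassifier(random_state=0).fit(X, y)
    indicator = clf.decision_path(X)  # sparse CSR, (n_samples, n_nodes)
    return indicator.shape
```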
@@ -572,6 +581,7 @@ def feature_importances_(self):
Returns
-------
feature_importances_ : array, shape = [n_features]
Normalized total reduction of critera by feature (Gini importance).
"""
check_is_fitted(self)

@@ -853,7 +863,7 @@ def fit(self, X, y, sample_weight=None, check_input=True,
ignored if they would result in any single class carrying a
negative weight in either child node.

check_input : boolean, (default=True)
check_input : bool, (default=True)
Allow to bypass several input checking.
Don't use this parameter unless you know what you do.

@@ -866,6 +876,7 @@ def fit(self, X, y, sample_weight=None, check_input=True,
Returns
-------
self : object
Fitted estimator.
"""

super().fit(
@@ -897,7 +908,7 @@ class in a leaf.

Returns
-------
p : array of shape (n_samples, n_classes), or a list of n_outputs
proba : array of shape (n_samples, n_classes), or a list of n_outputs \
such arrays if n_outputs > 1.
The class probabilities of the input samples. The order of the
classes corresponds to that in the attribute :term:`classes_`.
@@ -938,7 +949,7 @@ def predict_log_proba(self, X):

Returns
-------
p : array of shape (n_samples, n_classes), or a list of n_outputs
proba : array of shape (n_samples, n_classes), or a list of n_outputs \
such arrays if n_outputs > 1.
The class log-probabilities of the input samples. The order of the
classes corresponds to that in the attribute :term:`classes_`.
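The hunks above rename the `predict_proba` / `predict_log_proba` return value from `p` to `proba` and document its `(n_samples, n_classes)` shape. A sketch of that shape on toy binary data (dataset is mine, not from the PR):

```python
from sklearn.tree import DecisionTreeClassifier


def proba_shape():
    """predict_proba yields one row per sample and one column per class."""
    X = [[0], [1], [2], [3]]
    y = [0, 0, 1, 1]
    clf = DecisionTreeClassifier(random_state=0).fit(X, y)
    return clf.predict_proba(X).shape  # (n_samples, n_classes)
```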
@@ -962,7 +973,7 @@ class DecisionTreeRegressor(RegressorMixin, BaseDecisionTree):

Parameters
----------
criterion : string, optional (default="mse")
criterion : str, optional (default="mse")
The function to measure the quality of a split. Supported criteria
are "mse" for the mean squared error, which is equal to variance
reduction as feature selection criterion and minimizes the L2 loss
@@ -974,7 +985,7 @@ class DecisionTreeRegressor(RegressorMixin, BaseDecisionTree):
.. versionadded:: 0.18
Mean Absolute Error (MAE) criterion.

splitter : string, optional (default="best")
splitter : str, optional (default="best")
The strategy used to choose the split at each node. Supported
strategies are "best" to choose the best split and "random" to choose
the best random split.
@@ -1015,7 +1026,7 @@ class DecisionTreeRegressor(RegressorMixin, BaseDecisionTree):
the input samples) required to be at a leaf node. Samples have
equal weight when sample_weight is not provided.

max_features : int, float, string or None, optional (default=None)
max_features : int, float, str or None, optional (default=None)
The number of features to consider when looking for the best split:

- If int, then consider `max_features` features at each split.
@@ -1107,6 +1118,10 @@ class DecisionTreeRegressor(RegressorMixin, BaseDecisionTree):
:ref:`sphx_glr_auto_examples_tree_plot_unveil_tree_structure.py`
for basic usage of these attributes.

See Also
--------
DecisionTreeClassifier : A decision tree classifier.

Notes
-----
The default values for the parameters controlling the size of the trees
@@ -1122,10 +1137,6 @@ class DecisionTreeRegressor(RegressorMixin, BaseDecisionTree):
split. To obtain a deterministic behaviour during fitting,
``random_state`` has to be fixed.

See also
--------
DecisionTreeClassifier

References
----------

@@ -1202,7 +1213,7 @@ def fit(self, X, y, sample_weight=None, check_input=True,
that would create child nodes with net zero or negative weight are
ignored while searching for a split in each node.

check_input : boolean, (default=True)
check_input : bool, (default=True)
Allow to bypass several input checking.
Don't use this parameter unless you know what you do.

@@ -1215,6 +1226,7 @@ def fit(self, X, y, sample_weight=None, check_input=True,
Returns
-------
self : object
Fitted estimator.
"""

super().fit(
@@ -1257,11 +1269,11 @@ class ExtraTreeClassifier(DecisionTreeClassifier):

Parameters
----------
criterion : string, optional (default="gini")
criterion : str, optional (default="gini")
The function to measure the quality of a split. Supported criteria are
"gini" for the Gini impurity and "entropy" for the information gain.

splitter : string, optional (default="random")
splitter : str, optional (default="random")
The strategy used to choose the split at each node. Supported
strategies are "best" to choose the best split and "random" to choose
the best random split.
@@ -1302,7 +1314,7 @@ class ExtraTreeClassifier(DecisionTreeClassifier):
the input samples) required to be at a leaf node. Samples have
equal weight when sample_weight is not provided.

max_features : int, float, string or None, optional (default="auto")
max_features : int, float, str or None, optional (default="auto")
The number of features to consider when looking for the best split:

- If int, then consider `max_features` features at each split.
@@ -1416,7 +1428,7 @@ class ExtraTreeClassifier(DecisionTreeClassifier):
:ref:`sphx_glr_auto_examples_tree_plot_unveil_tree_structure.py`
for basic usage of these attributes.

See also
See Also
--------
ExtraTreeRegressor, sklearn.ensemble.ExtraTreesClassifier,
sklearn.ensemble.ExtraTreesRegressor
@@ -1481,7 +1493,7 @@ class ExtraTreeRegressor(DecisionTreeRegressor):

Parameters
----------
criterion : string, optional (default="mse")
criterion : str, optional (default="mse")
The function to measure the quality of a split. Supported criteria
are "mse" for the mean squared error, which is equal to variance
reduction as feature selection criterion, and "mae" for the mean
@@ -1490,7 +1502,7 @@ class ExtraTreeRegressor(DecisionTreeRegressor):
.. versionadded:: 0.18
Mean Absolute Error (MAE) criterion.

splitter : string, optional (default="random")
splitter : str, optional (default="random")
The strategy used to choose the split at each node. Supported
strategies are "best" to choose the best split and "random" to choose
the best random split.
@@ -1531,7 +1543,7 @@ class ExtraTreeRegressor(DecisionTreeRegressor):
the input samples) required to be at a leaf node. Samples have
equal weight when sample_weight is not provided.

max_features : int, float, string or None, optional (default="auto")
max_features : int, float, str or None, optional (default="auto")
The number of features to consider when looking for the best split:

- If int, then consider `max_features` features at each split.
@@ -1611,7 +1623,7 @@ class ExtraTreeRegressor(DecisionTreeRegressor):
:ref:`sphx_glr_auto_examples_tree_plot_unveil_tree_structure.py`
for basic usage of these attributes.

See also
See Also
--------
ExtraTreeClassifier, sklearn.ensemble.ExtraTreesClassifier,
sklearn.ensemble.ExtraTreesRegressor