
[MRG+1] DOC insert spaces before colons in parameter lists #7920


Merged
2 commits merged on Nov 25, 2016
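For context, the numpydoc standard that scikit-learn follows expects a space on both sides of the colon that separates a parameter name from its type, so the two render as distinct tokens in the built documentation. A minimal before/after sketch (the function and its parameters are illustrative, not taken from the diff below):

def transform(X, copy=True):
    """Apply the transformation.

    Parameters
    ----------
    X : array-like, shape (n_samples, n_features)
        Conventional style: a space before and after the colon.

    copy: bool, optional
        The style this PR removes: no space before the colon.
    """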
12 changes: 6 additions & 6 deletions sklearn/base.py
@@ -43,10 +43,10 @@ def clone(estimator, safe=True):

Parameters
----------
-estimator: estimator object, or list, tuple or set of objects
+estimator : estimator object, or list, tuple or set of objects
The estimator or group of estimators to be cloned

-safe: boolean, optional
+safe : boolean, optional
If safe is false, clone will fall back to a deepcopy on objects
that are not estimators.

@@ -134,13 +134,13 @@ def _pprint(params, offset=0, printer=repr):

Parameters
----------
-params: dict
+params : dict
The dictionary to pretty print

-offset: int
+offset : int
The offset in characters to add at the beginning of each line.

-printer:
+printer : callable
The function to convert entries to strings, typically
the builtin str or repr

@@ -510,7 +510,7 @@ def score(self, X, y=None):

Returns
-------
-score: float
+score : float
"""
pass
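As an aside, clone is part of the public sklearn.base API touched above. A minimal usage sketch, assuming scikit-learn is installed (the choice of LogisticRegression is illustrative):

from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

est = LogisticRegression(C=0.5)
# clone returns a new, unfitted estimator carrying the same parameters.
new_est = clone(est)
assert new_est.get_params()["C"] == 0.5
# With safe=False, non-estimator objects fall back to copy.deepcopy.
params_copy = clone({"C": 0.5}, safe=False)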
2 changes: 1 addition & 1 deletion sklearn/calibration.py
@@ -80,7 +80,7 @@ class CalibratedClassifierCV(BaseEstimator, ClassifierMixin):
classes_ : array, shape (n_classes)
The class labels.

-calibrated_classifiers_: list (len() equal to cv or 1 if cv == "prefit")
+calibrated_classifiers_ : list (len() equal to cv or 1 if cv == "prefit")
The list of calibrated classifiers, one for each cross-validation fold,
each fitted on all but the validation fold and calibrated on the
validation fold.
30 changes: 15 additions & 15 deletions sklearn/cluster/_k_means.pyx
@@ -180,24 +180,24 @@ def _mini_batch_update_csr(X, np.ndarray[DOUBLE, ndim=1] x_squared_norms,
Parameters
----------

-X: CSR matrix, dtype float
+X : CSR matrix, dtype float
The complete (pre-allocated) training set as a CSR matrix.

-centers: array, shape (n_clusters, n_features)
+centers : array, shape (n_clusters, n_features)
The cluster centers

-counts: array, shape (n_clusters,)
+counts : array, shape (n_clusters,)
The vector in which we keep track of the number of elements in a
cluster

Returns
-------
-inertia: float
+inertia : float
The inertia of the batch prior to centers update, i.e. the sum of
distances to the closest center for each sample. This is the objective
function being minimized by the k-means algorithm.

-squared_diff: float
+squared_diff : float
The sum of squared update (squared norm of the centers position
change). If compute_squared_diff is 0, this computation is skipped and
0.0 is returned instead.
@@ -281,20 +281,20 @@ def _centers_dense(np.ndarray[floating, ndim=2] X,

Parameters
----------
-X: array-like, shape (n_samples, n_features)
+X : array-like, shape (n_samples, n_features)

-labels: array of integers, shape (n_samples)
+labels : array of integers, shape (n_samples)
Current label assignment

-n_clusters: int
+n_clusters : int
Number of desired clusters

-distances: array-like, shape (n_samples)
+distances : array-like, shape (n_samples)
Distance to closest cluster for each sample.

Returns
-------
-centers: array, shape (n_clusters, n_features)
+centers : array, shape (n_clusters, n_features)
The resulting centers
"""
## TODO: add support for CSR input
@@ -342,20 +342,20 @@ def _centers_sparse(X, np.ndarray[INT, ndim=1] labels, n_clusters,

Parameters
----------
-X: scipy.sparse.csr_matrix, shape (n_samples, n_features)
+X : scipy.sparse.csr_matrix, shape (n_samples, n_features)

-labels: array of integers, shape (n_samples)
+labels : array of integers, shape (n_samples)
Current label assignment

-n_clusters: int
+n_clusters : int
Number of desired clusters

-distances: array-like, shape (n_samples)
+distances : array-like, shape (n_samples)
Distance to closest cluster for each sample.

Returns
-------
-centers: array, shape (n_clusters, n_features)
+centers : array, shape (n_clusters, n_features)
The resulting centers
"""
cdef int n_features = X.shape[1]
2 changes: 1 addition & 1 deletion sklearn/cluster/affinity_propagation_.py
@@ -278,7 +278,7 @@ def fit(self, X, y=None):
Parameters
----------

-X: array-like, shape (n_samples, n_features) or (n_samples, n_samples)
+X : array-like, shape (n_samples, n_features) or (n_samples, n_samples)
Data matrix or, if affinity is ``precomputed``, matrix of
similarities / affinities.
"""
4 changes: 2 additions & 2 deletions sklearn/cluster/birch.py
@@ -481,7 +481,7 @@ def _get_leaves(self):

Returns
-------
-leaves: array-like
+leaves : array-like
List of the leaf nodes.
"""
leaf_ptr = self.dummy_leaf_.next_leaf_
@@ -538,7 +538,7 @@ def predict(self, X):

Returns
-------
-labels: ndarray, shape(n_samples)
+labels : ndarray, shape(n_samples)
Labelled data.
"""
X = check_array(X, accept_sparse='csr')
2 changes: 1 addition & 1 deletion sklearn/cluster/hierarchical.py
@@ -116,7 +116,7 @@ def ward_tree(X, connectivity=None, n_clusters=None, return_distance=False):
limited use, and the 'parents' output should rather be used.
This option is valid only when specifying a connectivity matrix.

-return_distance: bool (optional)
+return_distance : bool (optional)
If True, return the distance between the clusters.

Returns
56 changes: 28 additions & 28 deletions sklearn/cluster/k_means_.py
@@ -47,20 +47,20 @@ def _k_init(X, n_clusters, x_squared_norms, random_state, n_local_trials=None):

Parameters
----------
-X: array or sparse matrix, shape (n_samples, n_features)
+X : array or sparse matrix, shape (n_samples, n_features)
The data to pick seeds for. To avoid memory copy, the input data
should be double precision (dtype=np.float64).

-n_clusters: integer
+n_clusters : integer
The number of seeds to choose

-x_squared_norms: array, shape (n_samples,)
+x_squared_norms : array, shape (n_samples,)
Squared Euclidean norm of each data point.

-random_state: numpy.RandomState
+random_state : numpy.RandomState
The generator used to initialize the centers.

-n_local_trials: integer, optional
+n_local_trials : integer, optional
The number of seeding trials for each center (except the first),
of which the one reducing inertia the most is greedily chosen.
Set to None to make the number of trials depend logarithmically
@@ -267,7 +267,7 @@ def k_means(X, n_clusters, init='k-means++', precompute_distances='auto',
The final value of the inertia criterion (sum of squared distances to
the closest centroid for all observations in the training set).

-best_n_iter: int
+best_n_iter : int
Number of iterations corresponding to the best results.
Returned only if `return_n_iter` is set to True.

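For reference, k_means is the functional counterpart of the KMeans estimator, and best_n_iter is only part of its return value when return_n_iter=True. A minimal sketch, assuming scikit-learn and NumPy are installed (the data is random and purely illustrative):

import numpy as np
from sklearn.cluster import k_means

rng = np.random.RandomState(0)
X = rng.rand(100, 2)
# With return_n_iter=True the function also returns the iteration count
# of the best run, documented above as best_n_iter.
centers, labels, inertia, n_iter = k_means(X, n_clusters=3, random_state=0,
                                           return_n_iter=True)
print(centers.shape)  # (3, 2)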
@@ -409,17 +409,17 @@ def _kmeans_single_lloyd(X, n_clusters, max_iter=300, init='k-means++',

Parameters
----------
-X: array-like of floats, shape (n_samples, n_features)
+X : array-like of floats, shape (n_samples, n_features)
The observations to cluster.

-n_clusters: int
+n_clusters : int
The number of clusters to form as well as the number of
centroids to generate.

-max_iter: int, optional, default 300
+max_iter : int, optional, default 300
Maximum number of iterations of the k-means algorithm to run.

-init: {'k-means++', 'random', or ndarray, or a callable}, optional
+init : {'k-means++', 'random', or ndarray, or a callable}, optional
Method for initialization, default to 'k-means++':

'k-means++' : selects initial cluster centers for k-mean
@@ -435,33 +435,33 @@
If a callable is passed, it should take arguments X, k and
a random state and return an initialization.

-tol: float, optional
+tol : float, optional
The relative increment in the results before declaring convergence.

-verbose: boolean, optional
+verbose : boolean, optional
Verbosity mode

-x_squared_norms: array
+x_squared_norms : array
Precomputed x_squared_norms.

precompute_distances : boolean, default: True
Precompute distances (faster but takes more memory).

-random_state: integer or numpy.RandomState, optional
+random_state : integer or numpy.RandomState, optional
The generator used to initialize the centers. If an integer is
given, it fixes the seed. Defaults to the global numpy random
number generator.

Returns
-------
-centroid: float ndarray with shape (k, n_features)
+centroid : float ndarray with shape (k, n_features)
Centroids found at the last iteration of k-means.

-label: integer ndarray with shape (n_samples,)
+label : integer ndarray with shape (n_samples,)
label[i] is the code or index of the centroid the
i'th observation is closest to.

-inertia: float
+inertia : float
The final value of the inertia criterion (sum of squared distances to
the closest centroid for all observations in the training set).

@@ -577,26 +577,26 @@ def _labels_inertia(X, x_squared_norms, centers,

Parameters
----------
-X: float64 array-like or CSR sparse matrix, shape (n_samples, n_features)
+X : float64 array-like or CSR sparse matrix, shape (n_samples, n_features)
The input samples to assign to the labels.

-x_squared_norms: array, shape (n_samples,)
+x_squared_norms : array, shape (n_samples,)
Precomputed squared euclidean norm of each data point, to speed up
computations.

-centers: float array, shape (k, n_features)
+centers : float array, shape (k, n_features)
The cluster centers.

precompute_distances : boolean, default: True
Precompute distances (faster but takes more memory).

-distances: float array, shape (n_samples,)
+distances : float array, shape (n_samples,)
Pre-allocated array to be filled in with each sample's distance
to the closest center.

Returns
-------
-labels: int array of shape(n)
+labels : int array of shape(n)
The resulting assignment

inertia : float
@@ -628,20 +628,20 @@ def _init_centroids(X, k, init, random_state=None, x_squared_norms=None,
Parameters
----------

-X: array, shape (n_samples, n_features)
+X : array, shape (n_samples, n_features)

-k: int
+k : int
number of centroids

-init: {'k-means++', 'random' or ndarray or callable} optional
+init : {'k-means++', 'random' or ndarray or callable} optional
Method for initialization

-random_state: integer or numpy.RandomState, optional
+random_state : integer or numpy.RandomState, optional
The generator used to initialize the centers. If an integer is
given, it fixes the seed. Defaults to the global numpy random
number generator.

-x_squared_norms: array, shape (n_samples,), optional
+x_squared_norms : array, shape (n_samples,), optional
Squared euclidean norm of each data point. Pass it if you have it
at hand already to avoid it being recomputed here. Default: None
@@ -653,7 +653,7 @@

Returns
-------
-centers: array, shape(k, n_features)
+centers : array, shape(k, n_features)
"""
random_state = check_random_state(random_state)
n_samples = X.shape[0]
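The edits in this file are mechanical, which suggests they could be found and fixed with a simple pattern search. A hypothetical helper along those lines (fix_param_line is not part of this PR; it is only a sketch of the transformation being applied):

import re

# Matches a numpydoc parameter line written as "name: type" (no space
# before the colon). Deliberately naive: it can also hit ordinary prose
# such as "Note: ...", so its output would still need review.
_MISSING_SPACE = re.compile(r"^(\s*)(\*{0,2}\w+): (.+)$")

def fix_param_line(line):
    """Rewrite 'name: type' as 'name : type', leaving other lines alone."""
    match = _MISSING_SPACE.match(line)
    if match is None:
        return line
    indent, name, rest = match.groups()
    return "%s%s : %s" % (indent, name, rest)

print(fix_param_line("    n_clusters: int"))  # "    n_clusters : int"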
2 changes: 1 addition & 1 deletion sklearn/cluster/spectral.py
@@ -39,7 +39,7 @@ def discretize(vectors, copy=True, max_svd_restarts=30, n_iter_max=20,
Maximum number of iterations to attempt in rotation and partition
matrix search if machine precision convergence is not reached

-random_state: int seed, RandomState instance, or None (default)
+random_state : int seed, RandomState instance, or None (default)
A pseudo random number generator used for the initialization
of the rotation matrix
4 changes: 2 additions & 2 deletions sklearn/covariance/graph_lasso_.py
@@ -461,7 +461,7 @@ class GraphLassoCV(GraphLasso):
grid to be used. See the notes in the class docstring for
more details.

-n_refinements: strictly positive integer
+n_refinements : strictly positive integer
The number of times the grid is refined. Not used if explicit
values of alphas are passed.

@@ -492,7 +492,7 @@ class GraphLassoCV(GraphLasso):
max_iter : integer, optional
Maximum number of iterations.

-mode: {'cd', 'lars'}
+mode : {'cd', 'lars'}
The Lasso solver to use: coordinate descent or LARS. Use LARS for
very sparse underlying graphs, where number of features is greater
than number of samples. Elsewhere prefer cd which is more numerically
6 changes: 3 additions & 3 deletions sklearn/covariance/shrunk_covariance_.py
@@ -168,7 +168,7 @@ def ledoit_wolf_shrinkage(X, assume_centered=False, block_size=1000):

Returns
-------
-shrinkage: float
+shrinkage : float
Coefficient in the convex combination used for the computation
of the shrunk estimate.

@@ -496,7 +496,7 @@ class OAS(EmpiricalCovariance):
store_precision : bool, default=True
Specify if the estimated precision is stored.

-assume_centered: bool, default=False
+assume_centered : bool, default=False
If True, data are not centered before computation.
Useful when working with data whose mean is almost, but not exactly,
zero.

@@ -545,7 +545,7 @@ def fit(self, X, y=None):

Returns
-------
-self: object
+self : object
Returns self.

"""