
[MRG] Add pprint for estimators - continued #11705


Merged
52 commits merged on Dec 20, 2018

Changes from all commits (52 commits):
acc3732
add pprint for estimators
amueller Jun 10, 2017
c99c85a
strip color from length, add color option
amueller Jun 10, 2017
a2ef6dc
Minor cleaning, fixes, factoring and docs
NicolasHug Jul 27, 2018
fd543e8
Added some basic tests
NicolasHug Jul 28, 2018
64932cd
Fixed line length issue
NicolasHug Jul 29, 2018
df281fb
fixed flake8 and added visual test for review
NicolasHug Jul 29, 2018
05df6ad
Fixed test
NicolasHug Jul 29, 2018
01db0c6
Merge branch 'master' of https://github.com/scikit-learn/scikit-learn…
NicolasHug Jul 29, 2018
89ee958
Fixed Python 2 issues (inspect.signature import)
NicolasHug Jul 30, 2018
f6450db
Trying to fix flake8 again
NicolasHug Jul 30, 2018
2079d78
Added special repr for functions
NicolasHug Aug 16, 2018
1a3c380
Added some other visual tests
NicolasHug Aug 16, 2018
cd39e8d
Changed _format_function in to _format_callable
NicolasHug Aug 17, 2018
d45bd0a
Consistent output in Python 2 and 3
NicolasHug Sep 4, 2018
30431a7
WIP
NicolasHug Sep 10, 2018
6a547d6
Now using the builtin pprint module
NicolasHug Sep 11, 2018
d82afee
pep8
NicolasHug Sep 11, 2018
4a98b5a
Added changed_only param
NicolasHug Sep 11, 2018
5a64453
Fixed printing when string would fit in less than line width
NicolasHug Sep 13, 2018
4f8c450
Fixed printing of steps parameter
NicolasHug Sep 13, 2018
68d1806
Fixed changed_only param for short estimators
NicolasHug Sep 13, 2018
9afcd0b
fixed pep8
NicolasHug Sep 13, 2018
07914e4
Added some more description in docstring
NicolasHug Sep 28, 2018
54ff3a8
changed_only is now an option from set_config()
NicolasHug Sep 28, 2018
4a3bb04
Put _pprint.py into sklearn/utils, added tests
NicolasHug Oct 1, 2018
eb9a171
Merge branch 'master' of https://github.com/scikit-learn/scikit-learn…
NicolasHug Oct 1, 2018
8b5b283
Added doctest NORMALIZE_WHITESPACE where needed
NicolasHug Oct 1, 2018
f0ed05f
Fixed tests
NicolasHug Oct 1, 2018
e785d22
fix test-doc
NicolasHug Oct 1, 2018
b64258e
fixing test that passed before....
NicolasHug Oct 1, 2018
14eac3b
Merge branch 'master' into pr/9099
NicolasHug Oct 24, 2018
2932be8
Fixed tests
NicolasHug Oct 24, 2018
6e62480
Added test for changed_only and long lines
NicolasHug Nov 19, 2018
191b421
typo
NicolasHug Nov 19, 2018
4942b97
Added authors names
NicolasHug Nov 27, 2018
7560f24
Added license file
NicolasHug Nov 28, 2018
5e23560
Added ellipsis based on number of elements in sequence + added increa…
NicolasHug Dec 9, 2018
19073c7
Updated whatsnew
NicolasHug Dec 9, 2018
92ecd48
dont use increaingly aggressive strategy
NicolasHug Dec 12, 2018
9170019
Merge branch 'master' into pr/9099
NicolasHug Dec 14, 2018
69fd6b4
Fixed tests
NicolasHug Dec 14, 2018
43216c8
Removed LICENSE file and put license text in _pprint.py
NicolasHug Dec 14, 2018
826c296
fixed test_base
NicolasHug Dec 14, 2018
99a1634
Sorted parameters dictionary for consistent output in 3.5
NicolasHug Dec 14, 2018
1a8a0ec
Actually using OrderedDict...
NicolasHug Dec 14, 2018
5ab28f9
Addressed comments
NicolasHug Dec 17, 2018
90f9543
Added test for NaN changed parameter
NicolasHug Dec 17, 2018
f2808a1
Update whatsnew
NicolasHug Dec 18, 2018
ab639ae
Added example to set_config()
NicolasHug Dec 18, 2018
c48d713
Removed example
NicolasHug Dec 18, 2018
4e06804
Added example in gallery
NicolasHug Dec 19, 2018
affaae5
Spelling
jnothman Dec 19, 2018
4 changes: 2 additions & 2 deletions doc/modules/compose.rst
@@ -76,13 +76,13 @@ filling in the names automatically::

The estimators of a pipeline are stored as a list in the ``steps`` attribute::

>>> pipe.steps[0]
>>> pipe.steps[0] # doctest: +NORMALIZE_WHITESPACE
('reduce_dim', PCA(copy=True, iterated_power='auto', n_components=None, random_state=None,
svd_solver='auto', tol=0.0, whiten=False))

and as a ``dict`` in ``named_steps``::

>>> pipe.named_steps['reduce_dim']
>>> pipe.named_steps['reduce_dim'] # doctest: +NORMALIZE_WHITESPACE
PCA(copy=True, iterated_power='auto', n_components=None, random_state=None,
svd_solver='auto', tol=0.0, whiten=False)

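The `# doctest: +NORMALIZE_WHITESPACE` directives added throughout the documentation make these doctests indifferent to how the new pretty-printer wraps and indents the estimator repr. A minimal, self-contained sketch of the effect (the `Point` class below is hypothetical and not part of scikit-learn):

```python
import doctest


class Point:
    """Hypothetical class whose repr wraps across two lines."""
    def __repr__(self):
        return "Point(x=1,\n      y=2)"


def strict():
    """
    >>> Point()
    Point(x=1, y=2)
    """


def normalized():
    """
    >>> Point()  # doctest: +NORMALIZE_WHITESPACE
    Point(x=1, y=2)
    """


# The strict example fails: the actual repr spans two lines with extra
# indentation. The normalized one passes: NORMALIZE_WHITESPACE treats any
# run of blanks and newlines as equivalent when comparing got vs. want.
doctest.run_docstring_examples(strict, {"Point": Point}, name="strict")
doctest.run_docstring_examples(normalized, {"Point": Point}, name="normalized")
```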
4 changes: 2 additions & 2 deletions doc/modules/linear_model.rst
@@ -185,7 +185,7 @@ for another implementation::

>>> from sklearn import linear_model
>>> reg = linear_model.Lasso(alpha=0.1)
>>> reg.fit([[0, 0], [1, 1]], [0, 1])
>>> reg.fit([[0, 0], [1, 1]], [0, 1]) # doctest: +NORMALIZE_WHITESPACE
Lasso(alpha=0.1, copy_X=True, fit_intercept=True, max_iter=1000,
normalize=False, positive=False, precompute=False, random_state=None,
selection='cyclic', tol=0.0001, warm_start=False)
@@ -639,7 +639,7 @@ Bayesian Ridge Regression is used for regression::
>>> X = [[0., 0.], [1., 1.], [2., 2.], [3., 3.]]
>>> Y = [0., 1., 2., 3.]
>>> reg = linear_model.BayesianRidge()
>>> reg.fit(X, Y)
>>> reg.fit(X, Y) # doctest: +NORMALIZE_WHITESPACE
BayesianRidge(alpha_1=1e-06, alpha_2=1e-06, compute_score=False, copy_X=True,
fit_intercept=True, lambda_1=1e-06, lambda_2=1e-06, n_iter=300,
normalize=False, tol=0.001, verbose=False)
4 changes: 2 additions & 2 deletions doc/modules/model_evaluation.rst
@@ -979,7 +979,7 @@ with a svm classifier in a binary class problem::
>>> X = [[0], [1]]
>>> y = [-1, 1]
>>> est = svm.LinearSVC(random_state=0)
>>> est.fit(X, y)
>>> est.fit(X, y) # doctest: +NORMALIZE_WHITESPACE
LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,
intercept_scaling=1, loss='squared_hinge', max_iter=1000,
multi_class='ovr', penalty='l2', random_state=0, tol=0.0001,
@@ -997,7 +997,7 @@ with a svm classifier in a multiclass problem::
>>> Y = np.array([0, 1, 2, 3])
>>> labels = np.array([0, 1, 2, 3])
>>> est = svm.LinearSVC()
>>> est.fit(X, Y)
>>> est.fit(X, Y) # doctest: +NORMALIZE_WHITESPACE
LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,
intercept_scaling=1, loss='squared_hinge', max_iter=1000,
multi_class='ovr', penalty='l2', random_state=None, tol=0.0001,
6 changes: 3 additions & 3 deletions doc/modules/preprocessing.rst
@@ -488,7 +488,7 @@ Continuing the example above::

>>> enc = preprocessing.OneHotEncoder()
>>> X = [['male', 'from US', 'uses Safari'], ['female', 'from Europe', 'uses Firefox']]
>>> enc.fit(X) # doctest: +ELLIPSIS
>>> enc.fit(X) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
OneHotEncoder(categorical_features=None, categories=None,
dtype=<... 'numpy.float64'>, handle_unknown='error',
n_values=None, sparse=True)
@@ -514,7 +514,7 @@ dataset::
>>> # Note that there are missing categorical values for the 2nd and 3rd
>>> # feature
>>> X = [['male', 'from US', 'uses Safari'], ['female', 'from Europe', 'uses Firefox']]
>>> enc.fit(X) # doctest: +ELLIPSIS
>>> enc.fit(X) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
OneHotEncoder(categorical_features=None,
categories=[...],
dtype=<... 'numpy.float64'>, handle_unknown='error',
@@ -532,7 +532,7 @@ columns for this feature will be all zeros

>>> enc = preprocessing.OneHotEncoder(handle_unknown='ignore')
>>> X = [['male', 'from US', 'uses Safari'], ['female', 'from Europe', 'uses Firefox']]
>>> enc.fit(X) # doctest: +ELLIPSIS
>>> enc.fit(X) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
OneHotEncoder(categorical_features=None, categories=None,
dtype=<... 'numpy.float64'>, handle_unknown='ignore',
n_values=None, sparse=True)
2 changes: 1 addition & 1 deletion doc/tutorial/statistical_inference/model_selection.rst
@@ -267,7 +267,7 @@ parameter automatically by cross-validation::
>>> diabetes = datasets.load_diabetes()
>>> X_diabetes = diabetes.data
>>> y_diabetes = diabetes.target
>>> lasso.fit(X_diabetes, y_diabetes)
>>> lasso.fit(X_diabetes, y_diabetes) # doctest: +NORMALIZE_WHITESPACE
LassoCV(alphas=None, copy_X=True, cv=3, eps=0.001, fit_intercept=True,
max_iter=1000, n_alphas=100, n_jobs=None, normalize=False,
positive=False, precompute='auto', random_state=None,
1 change: 1 addition & 0 deletions doc/tutorial/statistical_inference/supervised_learning.rst
@@ -334,6 +334,7 @@ application of Occam's razor: *prefer simpler models*.
>>> best_alpha = alphas[scores.index(max(scores))]
>>> regr.alpha = best_alpha
>>> regr.fit(diabetes_X_train, diabetes_y_train)
... # doctest: +NORMALIZE_WHITESPACE
Lasso(alpha=0.025118864315095794, copy_X=True, fit_intercept=True,
max_iter=1000, normalize=False, positive=False, precompute=False,
random_state=None, selection='cyclic', tol=0.0001, warm_start=False)
@@ -274,7 +274,7 @@ data by projecting on a principal subspace.

>>> from sklearn import decomposition
>>> pca = decomposition.PCA()
>>> pca.fit(X)
>>> pca.fit(X) # doctest: +NORMALIZE_WHITESPACE
PCA(copy=True, iterated_power='auto', n_components=None, random_state=None,
svd_solver='auto', tol=0.0, whiten=False)
>>> print(pca.explained_variance_) # doctest: +SKIP
9 changes: 8 additions & 1 deletion doc/whats_new/v0.21.rst
@@ -185,12 +185,19 @@ Support for Python 3.4 and below has been officially dropped.
``max_depth`` by 1 while expanding the tree if ``max_leaf_nodes`` and
``max_depth`` were both specified by the user. Please note that this also
affects all ensemble methods using decision trees.
:pr:`12344` by :user:`Adrin Jalali <adrinjalali>`.
:issue:`12344` by :user:`Adrin Jalali <adrinjalali>`.
Member Author

@adrinjalali I took the liberty to fix this

Multiple modules
................

- The `__repr__()` method of all estimators (used when calling
`print(estimator)`) has been entirely re-written, building on Python's
pretty printing standard library. All parameters are printed by default,
but this can be altered with the ``print_changed_only`` option in
:func:`sklearn.set_config`. :issue:`11705` by :user:`Nicolas Hug
<NicolasHug>`.

Changes to estimator checks
---------------------------

30 changes: 30 additions & 0 deletions examples/plot_changed_only_pprint_parameter.py
@@ -0,0 +1,30 @@
"""
=================================
Compact estimator representations
=================================

This example illustrates the use of the print_changed_only global parameter.

Setting print_changed_only to True will alter the representation of
estimators to only show the parameters that have been set to non-default
values. This can be used to have more compact representations.
"""
print(__doc__)

from sklearn.linear_model import LogisticRegression
from sklearn import set_config


lr = LogisticRegression(penalty='l1')
print('Default representation:')
print(lr)
# LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
# intercept_scaling=1, l1_ratio=None, max_iter=100,
# multi_class='warn', n_jobs=None, penalty='l1',
# random_state=None, solver='warn', tol=0.0001, verbose=0,
# warm_start=False)

set_config(print_changed_only=True)
print('\nWith changed_only option:')
print(lr)
# LogisticRegression(penalty='l1')
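A possible follow-up to the example above: since ``set_config`` changes the option globally, ``sklearn.config_context`` can scope it to a block instead. This is a sketch that assumes ``config_context`` forwards its keyword arguments to ``set_config`` (as it is implemented in ``sklearn/_config.py``) and restores the previous values on exit:

```python
from sklearn import config_context
from sklearn.linear_model import LogisticRegression

lr = LogisticRegression(penalty='l1')

# Inside the context only parameters set to non-default values are shown.
with config_context(print_changed_only=True):
    print(lr)   # LogisticRegression(penalty='l1')

# On exit the previous configuration is restored, so the full repr returns.
print(lr)       # LogisticRegression(C=1.0, class_weight=None, dual=False, ...)
```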
16 changes: 14 additions & 2 deletions sklearn/_config.py
@@ -5,7 +5,8 @@

_global_config = {
'assume_finite': bool(os.environ.get('SKLEARN_ASSUME_FINITE', False)),
'working_memory': int(os.environ.get('SKLEARN_WORKING_MEMORY', 1024))
'working_memory': int(os.environ.get('SKLEARN_WORKING_MEMORY', 1024)),
'print_changed_only': False,
}


@@ -20,7 +21,8 @@ def get_config():
return _global_config.copy()


def set_config(assume_finite=None, working_memory=None):
def set_config(assume_finite=None, working_memory=None,
print_changed_only=None):
"""Set global scikit-learn configuration

.. versionadded:: 0.19
@@ -43,11 +45,21 @@ def set_config(assume_finite=None, working_memory=None):

.. versionadded:: 0.20

print_changed_only : bool, optional
Member

would this be clearer as print_defaults?

Member Author

I don't think so because the meaning of the parameter is really "only show the parameters that don't have their default values, i.e. the ones that are changed"

"print_defaults" would mean something quite different.

Maybe "hide_defaults"? Or "print_hide_defaults"?

If True, only the parameters that were set to non-default
values will be printed when printing an estimator. For example,
``print(SVC())`` will then only print 'SVC()', while the default
behaviour would be to print 'SVC(C=1.0, cache_size=200, ...)' with
all the unchanged parameters.

.. versionadded:: 0.21
"""
if assume_finite is not None:
_global_config['assume_finite'] = assume_finite
if working_memory is not None:
_global_config['working_memory'] = working_memory
if print_changed_only is not None:
_global_config['print_changed_only'] = print_changed_only


@contextmanager
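Two details of this diff are worth spelling out: ``set_config`` only overwrites the options that are explicitly passed (anything left as ``None`` keeps its current value), and ``get_config`` returns a copy of the global dictionary. A small sketch of how the new flag behaves on this branch, where ``print_changed_only`` exists:

```python
import sklearn

# The new option defaults to False, so estimators print all parameters.
print(sklearn.get_config()['print_changed_only'])   # False

# set_config skips arguments left as None, so this call changes only
# print_changed_only and leaves assume_finite / working_memory untouched.
sklearn.set_config(print_changed_only=True)
print(sklearn.get_config()['print_changed_only'])   # True

# get_config returns a copy of the global dict; mutating the copy does
# not affect the actual configuration.
cfg = sklearn.get_config()
cfg['print_changed_only'] = False
print(sklearn.get_config()['print_changed_only'])   # still True
```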
20 changes: 17 additions & 3 deletions sklearn/base.py
@@ -224,9 +224,23 @@ def set_params(self, **params):
return self

def __repr__(self):
class_name = self.__class__.__name__
return '%s(%s)' % (class_name, _pprint(self.get_params(deep=False),
offset=len(class_name),),)
from .utils._pprint import _EstimatorPrettyPrinter

N_CHAR_MAX = 700 # number of non-whitespace or newline chars
N_MAX_ELEMENTS_TO_SHOW = 30 # number of elements to show in sequences

# use ellipsis for sequences with a lot of elements
pp = _EstimatorPrettyPrinter(
compact=True, indent=1, indent_at_name=True,
n_max_elements_to_show=N_MAX_ELEMENTS_TO_SHOW)

repr_ = pp.pformat(self)

# Use bruteforce ellipsis if string is very long
if len(''.join(repr_.split())) > N_CHAR_MAX: # check non-blank chars
lim = N_CHAR_MAX // 2
repr_ = repr_[:lim] + '...' + repr_[-lim:]
return repr_

def __getstate__(self):
try:
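The bruteforce ellipsis at the end of ``__repr__`` keeps very long representations bounded: only non-whitespace characters count against ``N_CHAR_MAX``, and when the budget is exceeded ``'...'`` is spliced into the middle of the string so both the head and the tail remain visible. A standalone sketch of that logic (the helper name is hypothetical):

```python
def _truncate_repr(repr_, n_char_max=700):
    """Hypothetical helper mirroring the truncation in BaseEstimator.__repr__.

    Only non-whitespace characters count towards the budget, so a long but
    heavily indented repr is not truncated prematurely.
    """
    if len(''.join(repr_.split())) > n_char_max:   # count non-blank chars
        lim = n_char_max // 2
        repr_ = repr_[:lim] + '...' + repr_[-lim:]
    return repr_


short = "PCA(n_components=2)"
long = "Estimator(" + ", ".join("p%d=0" % i for i in range(400)) + ")"

print(_truncate_repr(short))                   # unchanged: 'PCA(n_components=2)'
print(len(long), len(_truncate_repr(long)))    # the long repr is clipped to 703 chars
```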
2 changes: 1 addition & 1 deletion sklearn/cluster/hierarchical.py
@@ -919,7 +919,7 @@ class FeatureAgglomeration(AgglomerativeClustering, AgglomerationTransform):
>>> images = digits.images
>>> X = np.reshape(images, (len(images), -1))
>>> agglo = cluster.FeatureAgglomeration(n_clusters=32)
>>> agglo.fit(X) # doctest: +ELLIPSIS
>>> agglo.fit(X) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
FeatureAgglomeration(affinity='euclidean', compute_full_tree='auto',
connectivity=None, linkage='ward', memory=None, n_clusters=32,
pooling_func=...)
4 changes: 2 additions & 2 deletions sklearn/decomposition/pca.py
@@ -276,7 +276,7 @@ class PCA(_BasePCA):
>>> from sklearn.decomposition import PCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> pca = PCA(n_components=2)
>>> pca.fit(X)
>>> pca.fit(X) # doctest: +NORMALIZE_WHITESPACE
PCA(copy=True, iterated_power='auto', n_components=2, random_state=None,
svd_solver='auto', tol=0.0, whiten=False)
>>> print(pca.explained_variance_ratio_) # doctest: +ELLIPSIS
@@ -294,7 +294,7 @@ class PCA(_BasePCA):
[6.30061... 0.54980...]

>>> pca = PCA(n_components=1, svd_solver='arpack')
>>> pca.fit(X)
>>> pca.fit(X) # doctest: +NORMALIZE_WHITESPACE
PCA(copy=True, iterated_power='auto', n_components=1, random_state=None,
svd_solver='arpack', tol=0.0, whiten=False)
>>> print(pca.explained_variance_ratio_) # doctest: +ELLIPSIS
2 changes: 1 addition & 1 deletion sklearn/discriminant_analysis.py
@@ -242,7 +242,7 @@ class LinearDiscriminantAnalysis(BaseEstimator, LinearClassifierMixin,
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> clf = LinearDiscriminantAnalysis()
>>> clf.fit(X, y)
>>> clf.fit(X, y) # doctest: +NORMALIZE_WHITESPACE
LinearDiscriminantAnalysis(n_components=None, priors=None, shrinkage=None,
solver='svd', store_covariance=False, tol=0.0001)
>>> print(clf.predict([[-0.8, -1]]))
10 changes: 5 additions & 5 deletions sklearn/ensemble/forest.py
@@ -956,7 +956,7 @@ class labels (multi-output problem).
... random_state=0, shuffle=False)
>>> clf = RandomForestClassifier(n_estimators=100, max_depth=2,
... random_state=0)
>>> clf.fit(X, y)
>>> clf.fit(X, y) # doctest: +NORMALIZE_WHITESPACE
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=2, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
@@ -1208,7 +1208,7 @@ class RandomForestRegressor(ForestRegressor):
... random_state=0, shuffle=False)
>>> regr = RandomForestRegressor(max_depth=2, random_state=0,
... n_estimators=100)
>>> regr.fit(X, y)
>>> regr.fit(X, y) # doctest: +NORMALIZE_WHITESPACE
RandomForestRegressor(bootstrap=True, criterion='mse', max_depth=2,
max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
@@ -1235,7 +1235,7 @@ class RandomForestRegressor(ForestRegressor):
search of the best split. To obtain a deterministic behaviour during
fitting, ``random_state`` has to be fixed.

The default value ``max_features="auto"`` uses ``n_features``
The default value ``max_features="auto"`` uses ``n_features``
rather than ``n_features / 3``. The latter was originally suggested in
[1], whereas the former was more recently justified empirically in [2].


.. [1] L. Breiman, "Random Forests", Machine Learning, 45(1), 5-32, 2001.

.. [2] P. Geurts, D. Ernst., and L. Wehenkel, "Extremely randomized
.. [2] P. Geurts, D. Ernst., and L. Wehenkel, "Extremely randomized
trees", Machine Learning, 63(1), 3-42, 2006.

See also
@@ -1496,7 +1496,7 @@ class labels (multi-output problem).
References
----------

.. [1] P. Geurts, D. Ernst., and L. Wehenkel, "Extremely randomized
.. [1] P. Geurts, D. Ernst., and L. Wehenkel, "Extremely randomized
trees", Machine Learning, 63(1), 3-42, 2006.

See also
1 change: 1 addition & 0 deletions sklearn/feature_extraction/dict_vectorizer.py
@@ -345,6 +345,7 @@ def restrict(self, support, indices=False):
>>> v.get_feature_names()
['bar', 'baz', 'foo']
>>> v.restrict(support.get_support()) # doctest: +ELLIPSIS
... # doctest: +NORMALIZE_WHITESPACE
DictVectorizer(dtype=..., separator='=', sort=True,
sparse=True)
>>> v.get_feature_names()
2 changes: 1 addition & 1 deletion sklearn/impute.py
@@ -464,7 +464,7 @@ class MissingIndicator(BaseEstimator, TransformerMixin):
... [np.nan, 2, 3],
... [2, 4, 0]])
>>> indicator = MissingIndicator()
>>> indicator.fit(X1)
>>> indicator.fit(X1) # doctest: +NORMALIZE_WHITESPACE
MissingIndicator(error_on_new=True, features='missing-only',
missing_values=nan, sparse='auto')
>>> X2_tr = indicator.transform(X2)
4 changes: 2 additions & 2 deletions sklearn/kernel_approximation.py
@@ -163,7 +163,7 @@ class SkewedChi2Sampler(BaseEstimator, TransformerMixin):
... random_state=0)
>>> X_features = chi2_feature.fit_transform(X, y)
>>> clf = SGDClassifier(max_iter=10, tol=1e-3)
>>> clf.fit(X_features, y)
>>> clf.fit(X_features, y) # doctest: +NORMALIZE_WHITESPACE
SGDClassifier(alpha=0.0001, average=False, class_weight=None,
early_stopping=False, epsilon=0.1, eta0=0.0, fit_intercept=True,
l1_ratio=0.15, learning_rate='optimal', loss='hinge', max_iter=10,
@@ -283,7 +283,7 @@ class AdditiveChi2Sampler(BaseEstimator, TransformerMixin):
>>> chi2sampler = AdditiveChi2Sampler(sample_steps=2)
>>> X_transformed = chi2sampler.fit_transform(X, y)
>>> clf = SGDClassifier(max_iter=5, random_state=0, tol=1e-3)
>>> clf.fit(X_transformed, y)
>>> clf.fit(X_transformed, y) # doctest: +NORMALIZE_WHITESPACE
SGDClassifier(alpha=0.0001, average=False, class_weight=None,
early_stopping=False, epsilon=0.1, eta0=0.0, fit_intercept=True,
l1_ratio=0.15, learning_rate='optimal', loss='hinge', max_iter=5,
6 changes: 4 additions & 2 deletions sklearn/linear_model/coordinate_descent.py
@@ -623,7 +623,7 @@ class ElasticNet(LinearModel, RegressorMixin):

>>> X, y = make_regression(n_features=2, random_state=0)
>>> regr = ElasticNet(random_state=0)
>>> regr.fit(X, y)
>>> regr.fit(X, y) # doctest: +NORMALIZE_WHITESPACE
ElasticNet(alpha=1.0, copy_X=True, fit_intercept=True, l1_ratio=0.5,
max_iter=1000, normalize=False, positive=False, precompute=False,
random_state=0, selection='cyclic', tol=0.0001, warm_start=False)
@@ -903,6 +903,7 @@ class Lasso(ElasticNet):
>>> from sklearn import linear_model
>>> clf = linear_model.Lasso(alpha=0.1)
>>> clf.fit([[0,0], [1, 1], [2, 2]], [0, 1, 2])
... # doctest: +NORMALIZE_WHITESPACE
Lasso(alpha=0.1, copy_X=True, fit_intercept=True, max_iter=1000,
normalize=False, positive=False, precompute=False, random_state=None,
selection='cyclic', tol=0.0001, warm_start=False)
@@ -1552,7 +1553,7 @@ class ElasticNetCV(LinearModelCV, RegressorMixin):

>>> X, y = make_regression(n_features=2, random_state=0)
>>> regr = ElasticNetCV(cv=5, random_state=0)
>>> regr.fit(X, y)
>>> regr.fit(X, y) # doctest: +NORMALIZE_WHITESPACE
ElasticNetCV(alphas=None, copy_X=True, cv=5, eps=0.001, fit_intercept=True,
l1_ratio=0.5, max_iter=1000, n_alphas=100, n_jobs=None,
normalize=False, positive=False, precompute='auto', random_state=0,
@@ -1907,7 +1908,7 @@ class MultiTaskLasso(MultiTaskElasticNet):
>>> from sklearn import linear_model
>>> clf = linear_model.MultiTaskLasso(alpha=0.1)
>>> clf.fit([[0,0], [1, 1], [2, 2]], [[0, 0], [1, 1], [2, 2]])
... # doctest: +NORMALIZE_WHITESPACE
MultiTaskLasso(alpha=0.1, copy_X=True, fit_intercept=True, max_iter=1000,
normalize=False, random_state=None, selection='cyclic', tol=0.0001,
warm_start=False)
4 changes: 2 additions & 2 deletions sklearn/linear_model/passive_aggressive.py
@@ -141,7 +141,7 @@ class PassiveAggressiveClassifier(BaseSGDClassifier):
>>> X, y = make_classification(n_features=4, random_state=0)
>>> clf = PassiveAggressiveClassifier(max_iter=1000, random_state=0,
... tol=1e-3)
>>> clf.fit(X, y)
>>> clf.fit(X, y) # doctest: +NORMALIZE_WHITESPACE
PassiveAggressiveClassifier(C=1.0, average=False, class_weight=None,
early_stopping=False, fit_intercept=True, loss='hinge',
max_iter=1000, n_iter=None, n_iter_no_change=5, n_jobs=None,
@@ -380,7 +380,7 @@ class PassiveAggressiveRegressor(BaseSGDRegressor):
>>> X, y = make_regression(n_features=4, random_state=0)
>>> regr = PassiveAggressiveRegressor(max_iter=100, random_state=0,
... tol=1e-3)
>>> regr.fit(X, y)
>>> regr.fit(X, y) # doctest: +NORMALIZE_WHITESPACE
PassiveAggressiveRegressor(C=1.0, average=False, early_stopping=False,
epsilon=0.1, fit_intercept=True, loss='epsilon_insensitive',
max_iter=100, n_iter=None, n_iter_no_change=5,