
Fixed Convergence Warnings On MLP Training Curves #14144


Merged
17 commits merged into scikit-learn:master on Jun 24, 2019

Conversation

@martinoywa (Contributor) commented on Jun 22, 2019:

Referencing PR / Issue

This addresses #14117 in examples/neural_networks/plot_mlp_training_curves.py.

Note

There were a lot of ConvergenceWarnings from the stochastic optimizer.
These could be avoided by increasing the maximum number of iterations,
but at the cost of longer run time, so I filtered the warnings out by
ignoring them instead.

#WiMLDS_Nairobi
cc: @lagvier @mcecily @toopurity @felexkemboi
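
For context, this is roughly the module-wide filter the description refers to; the dataset and `MLPClassifier` settings below are illustrative stand-ins, not the example's exact values:

```python
import warnings

from sklearn.datasets import load_iris
from sklearn.exceptions import ConvergenceWarning
from sklearn.neural_network import MLPClassifier

# Ignore ConvergenceWarning for the rest of the script. Note the filter
# is process-wide, which is what the review below pushes back on.
warnings.filterwarnings("ignore", category=ConvergenceWarning)

X, y = load_iris(return_X_y=True)
# max_iter is kept deliberately small, so the stochastic optimizer
# would normally emit a ConvergenceWarning on fit().
mlp = MLPClassifier(solver="sgd", max_iter=20, random_state=0)
mlp.fit(X, y)  # fits without printing a warning
```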

@adrinjalali (Member) commented:

@martinoywa tests failing

@martinoywa changed the title from "fix convergence warnings" to "Fix Convergence Warnings" on Jun 22, 2019
@martinoywa changed the title from "Fix Convergence Warnings" to "Fix Convergence Warnings On MLP Training Curves" on Jun 22, 2019
@martinoywa changed the title from "Fix Convergence Warnings On MLP Training Curves" to "Fixed Convergence Warnings On MLP Training Curves" on Jun 22, 2019
@jnothman (Member) commented:

An example shows users how to use our library, and a ConvergenceWarning is there to protect our users. I think it's appropriate, if you ignore a warning in an example, to leave a comment explaining where those warnings come from and why you might want to ignore them in this context, i.e. say that users might rather increase the number of iterations.

(I also don't know the internals of how these examples are executed by sphinx-gallery to determine if there is a chance this warning filter will apply globally to other examples.)
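
For illustration, the alternative jnothman points to is simply giving the optimizer more iterations; the numbers here are made up, using the same toy setup as the sketch above:

```python
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
# Raising max_iter trades run time for convergence: once the optimizer
# actually converges, no ConvergenceWarning is raised and no filter is
# needed at all.
mlp = MLPClassifier(solver="sgd", max_iter=2000, random_state=0)
mlp.fit(X, y)
```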

```python
import warnings

from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn import datasets
from sklearn.exceptions import ConvergenceWarning

warnings.filterwarnings("ignore", category=ConvergenceWarning)
```
A reviewer (Member) commented:

Maybe add a comment like

Some of the parameter combinations will not converge as can be seen on the plots so we ignore them here

@martinoywa (Author) replied:

Let me get to it

@adrinjalali (Member) commented:

> (I also don't know the internals of how these examples are executed by sphinx-gallery to determine if there is a chance this warning filter will apply globally to other examples.)

probably a `with warnings.filterwarnings(...)` construct would be safer?
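
Strictly speaking, `warnings.filterwarnings` is not itself a context manager; the scoped construct the diff below converges on pairs `warnings.catch_warnings()` with a filter set inside the block. A minimal sketch, where `mlp`, `X`, and `y` stand for the objects defined in the example:

```python
import warnings

from sklearn.exceptions import ConvergenceWarning

with warnings.catch_warnings():
    # The filter only lives inside this block: catch_warnings() restores
    # the previous warning state on exit, so sphinx-gallery's other
    # examples are unaffected.
    warnings.filterwarnings("ignore", category=ConvergenceWarning)
    mlp.fit(X, y)
```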

```python
for ax, data, name in zip(axes.ravel(), data_sets, ['iris', 'digits',
    # some parameter combinations will not converge as can be seen on the
    # plots so they are ignored here
    with warnings.catch_warnings():
```
A reviewer (Member) commented:

only mlp.fit needs to be inside of the context manager I think

@martinoywa (Author) replied:

Just seeing that now. Thanks

@martinoywa (Author) commented on Jun 24, 2019:

> probably a `with warnings.filterwarnings(...)` construct would be safer?

I've made some changes; is that what you were aiming for?

print("training: %s" % label)
mlp = MLPClassifier(verbose=0, random_state=0,
max_iter=max_iter, **param)
mlp.fit(X, y)
A reviewer (Member) commented:

I think the context manager should just be for this line

@martinoywa (Author) replied on Jun 24, 2019:

I was putting it around the preprocessing step instead of the model fitting. Sorry about that.

```diff
@@ -64,12 +70,19 @@ def plot_on_dataset(X, y, ax, name):
             print("training: %s" % label)
             mlp = MLPClassifier(verbose=0, random_state=0,
                                 max_iter=max_iter, **param)
             mlp.fit(X, y)

```
A reviewer (Member) commented:

this line has trailing white spaces but that's not really important. LGTM anyway.

thanks @martinoywa
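
Pieced together from the fragments above, the final shape of the training loop is roughly as follows; the enclosing `for label, param in zip(labels, params)` line is inferred from the example rather than shown in the diff:

```python
for label, param in zip(labels, params):
    print("training: %s" % label)
    mlp = MLPClassifier(verbose=0, random_state=0,
                        max_iter=max_iter, **param)

    # some parameter combinations will not converge as can be seen on the
    # plots so they are ignored here
    with warnings.catch_warnings():
        warnings.filterwarnings("ignore", category=ConvergenceWarning)
        # only the fit itself sits inside the context manager, as the
        # review asked
        mlp.fit(X, y)
```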

@adrinjalali (Member) left a review:

Thanks @martinoywa :)

@adrinjalali merged commit 24d4b2c into scikit-learn:master on Jun 24, 2019
koenvandevelde pushed a commit to koenvandevelde/scikit-learn that referenced this pull request Jul 12, 2019
Fixed Convergence Warnings On MLP Training Curves (#14144)

* fix convergence warnings

* fix convergence warnings

* PEP8

* PEP8

* Fix Convergence Warning by changing the Optimization Algorithm

* PEP8

* Fixed Future Warnings by explicitly defining n_estimators.

* PEP8

* deleted all

* Fixed Convergence Warnings

* removed changes on unrelated examples

* add comment and with statement

* PEP8

* context manager fix

* fixed indentation

* PEP8

* flake8
@martinoywa deleted the feature branch on July 28, 2019 07:31
jnothman pushed a commit to jnothman/scikit-learn that referenced this pull request Jul 29, 2019
Fixed Convergence Warnings On MLP Training Curves (#14144), with the same 17 commits listed above.