Ensure estimators converged in test_bayesian_mixture_fit_predict #12266
In the test `test_bayesian_mixture_fit_predict`, the `BayesianGaussianMixture` estimators used to build the predictions did not converge (as indicated by the warning message). In such circumstances, `fit_predict(X)` bases its prediction on the responsibilities computed by the e-step of the last iteration (see base.py#L244, used at base.py#L274), whereas `fit(X).predict(X)` uses the parameters left by the m-step of the last iteration. Hence, for the test to reasonably expect the same predictions, the estimators must converge.
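A minimal sketch of the discrepancy under scikit-learn 0.20.0, which this PR targets (the synthetic data and seed here are illustrative, not the ones from the failing test):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 2), rng.randn(100, 2) + 5])

# A deliberately tiny iteration budget so the EM loop stops before converging.
bgmm1 = BayesianGaussianMixture(n_components=4, max_iter=2, random_state=0)
bgmm2 = BayesianGaussianMixture(n_components=4, max_iter=2, random_state=0)

# fit_predict labels with the responsibilities of the last e-step, which
# precede the last m-step; fit(X).predict(X) runs a fresh e-step on the
# parameters left by that m-step, so the labelings may disagree here.
labels_a = bgmm1.fit_predict(X)
labels_b = bgmm2.fit(X).predict(X)

print(bgmm1.converged_, (labels_a == labels_b).all())
```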
The test failure was caught in the Intel Distribution for Python, where changes to KMeans made the initialization of `BayesianGaussianMixture` different. It should be possible to reproduce this failure in current master by playing with `random_state`.

What does this implement/fix? Explain your changes.
The change increases the `max_iter` keyword value and adds assertions that both estimators have converged; a sketch of the shape of the change is below.
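This is a hypothetical sketch, not the actual diff (the real edit lives in the scikit-learn test suite and differs in data setup and parameters):

```python
import copy

import numpy as np
from numpy.testing import assert_array_equal
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 2), rng.randn(100, 2) + 5])

# Raised iteration budget so both estimators can actually converge.
bgmm1 = BayesianGaussianMixture(n_components=2, max_iter=1000,
                                random_state=rng, tol=1e-3)
bgmm2 = copy.deepcopy(bgmm1)  # identical estimator, identical RNG state

Y_pred1 = bgmm1.fit(X).predict(X)
Y_pred2 = bgmm2.fit_predict(X)

# New assertions: comparing the two paths is only meaningful on convergence.
assert bgmm1.converged_
assert bgmm2.converged_
assert_array_equal(Y_pred1, Y_pred2)
```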
Any other comments?

Here is the reproducer in current 0.20.0 installed from pip:
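The snippet below is a reconstructed stand-in along those lines (hypothetical seed sweep and synthetic data; the `random_state` that triggers the mismatch is platform-dependent):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

for seed in range(50):  # illustrative sweep; adjust the range as needed
    rng = np.random.RandomState(seed)
    X = np.vstack([rng.randn(100, 2), rng.randn(100, 2) + 5])
    bgmm1 = BayesianGaussianMixture(n_components=4, max_iter=100,
                                    random_state=seed)
    bgmm2 = BayesianGaussianMixture(n_components=4, max_iter=100,
                                    random_state=seed)
    p1 = bgmm1.fit(X).predict(X)
    p2 = bgmm2.fit_predict(X)
    if not bgmm2.converged_ and (p1 != p2).any():
        print("seed %d: not converged, fit_predict != fit().predict()" % seed)
        break
```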
@GaelVaroquaux