some GP tests failing on py3.6 #16013

Closed
adrinjalali opened this issue Jan 3, 2020 · 6 comments

@adrinjalali (Member)

The following tests fail when trying to add a py3.6 build to the CI:

_______________________ test_gpr_interpolation[kernel4] ________________________

kernel = 1**2 * RBF(length_scale=1) + 0.00316**2

    @pytest.mark.parametrize('kernel', kernels)
    def test_gpr_interpolation(kernel):
        # Test the interpolating property for different kernels.
        gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)
        y_pred, y_cov = gpr.predict(X, return_cov=True)
    
>       assert_almost_equal(y_pred, y)
E       AssertionError: 
E       Arrays are not almost equal to 7 decimals
E       
E       (mismatch 100.0%)
E        x: array([ 1.9363794, -2.7942843, -2.4075474, -0.2947012,  3.0977077,
E               7.7698931])
E        y: array([ 0.841471 ,  0.42336  , -4.7946214, -1.676493 ,  4.5989062,
E               7.914866 ])

gpr        = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,
                         kernel=1**2 * RBF(length_scale=1) + ...      n_restarts_optimizer=0, normalize_y=False,
                         optimizer='fmin_l_bfgs_b', random_state=None)
kernel     = 1**2 * RBF(length_scale=1) + 0.00316**2
y_cov      = array([[  9.00541863e-11,   2.38742359e-11,  -7.78754838e-12,
         -1.06723519e-11,  -4.91695573e-12,   9.47864010... [  9.47864010e-12,  -1.84314786e-11,  -9.46442924e-12,
          8.85336249e-12,   3.63939989e-11,   7.31432692e-11]])
y_pred     = array([ 1.93637936, -2.79428431, -2.40754743, -0.29470122,  3.09770773,
        7.76989309])

/io/sklearn/gaussian_process/tests/test_gpr.py:53: AssertionError
_________________________ test_lml_improving[kernel3] __________________________

kernel = 1**2 * RBF(length_scale=1) + 0.00316**2

    @pytest.mark.parametrize('kernel', non_fixed_kernels)
    def test_lml_improving(kernel):
        # Test that hyperparameter-tuning improves log-marginal likelihood.
        gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)
>       assert (gpr.log_marginal_likelihood(gpr.kernel_.theta) >
                gpr.log_marginal_likelihood(kernel.theta))
E       AssertionError: assert -111269784349.14124 > -48.880110953374277
E        +  where -111269784349.14124 = <bound method GaussianProcessRegressor.log_marginal_likelihood of GaussianProcessRegressor(alpha=1e-10, copy_X_train=T...     n_restarts_optimizer=0, normalize_y=False,\n                         optimizer='fmin_l_bfgs_b', random_state=None)>(array([  4.60517019,   6.90775528, -11.51292546]))
E        +    where <bound method GaussianProcessRegressor.log_marginal_likelihood of GaussianProcessRegressor(alpha=1e-10, copy_X_train=T...     n_restarts_optimizer=0, normalize_y=False,\n                         optimizer='fmin_l_bfgs_b', random_state=None)> = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,\n                         kernel=1**2 * RBF(length_scale=1) + ...      n_restarts_optimizer=0, normalize_y=False,\n                         optimizer='fmin_l_bfgs_b', random_state=None).log_marginal_likelihood
E        +    and   array([  4.60517019,   6.90775528, -11.51292546]) = 10**2 * RBF(length_scale=1e+03) + 0.00316**2.theta
E        +      where 10**2 * RBF(length_scale=1e+03) + 0.00316**2 = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,\n                         kernel=1**2 * RBF(length_scale=1) + ...      n_restarts_optimizer=0, normalize_y=False,\n                         optimizer='fmin_l_bfgs_b', random_state=None).kernel_
E        +  and   -48.880110953374277 = <bound method GaussianProcessRegressor.log_marginal_likelihood of GaussianProcessRegressor(alpha=1e-10, copy_X_train=T...     n_restarts_optimizer=0, normalize_y=False,\n                         optimizer='fmin_l_bfgs_b', random_state=None)>(array([  0.        ,   0.        , -11.51292546]))
E        +    where <bound method GaussianProcessRegressor.log_marginal_likelihood of GaussianProcessRegressor(alpha=1e-10, copy_X_train=T...     n_restarts_optimizer=0, normalize_y=False,\n                         optimizer='fmin_l_bfgs_b', random_state=None)> = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,\n                         kernel=1**2 * RBF(length_scale=1) + ...      n_restarts_optimizer=0, normalize_y=False,\n                         optimizer='fmin_l_bfgs_b', random_state=None).log_marginal_likelihood
E        +    and   array([  0.        ,   0.        , -11.51292546]) = 1**2 * RBF(length_scale=1) + 0.00316**2.theta

gpr        = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,
                         kernel=1**2 * RBF(length_scale=1) + ...      n_restarts_optimizer=0, normalize_y=False,
                         optimizer='fmin_l_bfgs_b', random_state=None)
kernel     = 1**2 * RBF(length_scale=1) + 0.00316**2

/io/sklearn/gaussian_process/tests/test_gpr.py:75: AssertionError
_______________________ test_predict_cov_vs_std[kernel4] _______________________

kernel = 1**2 * RBF(length_scale=1) + 0.00316**2

    @pytest.mark.parametrize('kernel', kernels)
    def test_predict_cov_vs_std(kernel):
        # Test that predicted std.-dev. is consistent with cov's diagonal.
        gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)
        y_mean, y_cov = gpr.predict(X2, return_cov=True)
        y_mean, y_std = gpr.predict(X2, return_std=True)
>       assert_almost_equal(np.sqrt(np.diag(y_cov)), y_std)
E       AssertionError: 
E       Arrays are not almost equal to 7 decimals
E       
E       (mismatch 100.0%)
E        x: array([  6.5705842e-06,   6.5445791e-06,   5.8582603e-06,   5.0646414e-06,
E                6.5141087e-06])
E        y: array([ 0.078642 ,  0.0816751,  0.0748455,  0.0798408,  0.0814949])

gpr        = GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,
                         kernel=1**2 * RBF(length_scale=1) + ...      n_restarts_optimizer=0, normalize_y=False,
                         optimizer='fmin_l_bfgs_b', random_state=None)
kernel     = 1**2 * RBF(length_scale=1) + 0.00316**2
y_cov      = array([[  4.31725766e-11,   2.48689958e-11,   1.17097443e-11,
          3.24007488e-12,  -5.03064257e-12],
       [  2...89646e-11],
       [ -5.03064257e-12,  -3.79429821e-12,   9.15179044e-12,
          2.35189646e-11,   4.24336122e-11]])
y_mean     = array([-1.06857149, -3.2405798 , -1.51121271,  1.24148886,  5.27415799])
y_std      = array([ 0.07864202,  0.08167515,  0.0748455 ,  0.07984077,  0.08149491])

/io/sklearn/gaussian_process/tests/test_gpr.py:182: AssertionError
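
For anyone trying to reproduce this outside the test suite, here is a minimal sketch. The data and kernel are reconstructed from the traceback above (the expected y values match f(x) = x * sin(x) at the six training points, and the kernel4 repr corresponds to a ConstantKernel * RBF plus a small additive ConstantKernel); the exact fixtures and bounds live in test_gpr.py, so treat these as assumptions:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Training data: the expected y array in the traceback matches
# f(x) = x * sin(x) at x = [1, 3, 5, 6, 7, 8] (e.g. 1 * sin(1) ~= 0.841471).
X = np.atleast_2d([1.0, 3.0, 5.0, 6.0, 7.0, 8.0]).T
y = (X * np.sin(X)).ravel()

# kernel4: the repr 1**2 * RBF(length_scale=1) + 0.00316**2 is
# ConstantKernel(1.0) * RBF(1.0) plus ConstantKernel(1e-5), since
# 0.00316**2 ~= 1e-5. The bounds below are assumptions.
kernel = (ConstantKernel(1.0, (1e-2, 1e2))
          * RBF(length_scale=1.0, length_scale_bounds=(1e-3, 1e3))
          + ConstantKernel(1e-5, (1e-5, 1e2)))

gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)
y_pred, _ = gpr.predict(X, return_cov=True)

# A healthy build interpolates the training data almost exactly;
# on the affected setup y_pred drifts far from y.
print(np.abs(y_pred - y).max())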
@thomasjpfan (Member)

For reference, this error appeared on 32-bit Linux with Python 3.6.

@thomasjpfan (Member)

As an experiment, removing the bounds keyword parameter here:

def _constrained_optimization(self, obj_func, initial_theta, bounds):
    if self.optimizer == "fmin_l_bfgs_b":
        opt_res = scipy.optimize.minimize(
            obj_func, initial_theta, method="L-BFGS-B", jac=True,
            bounds=bounds)

makes the failing tests pass. This is not a proper fix, since test_solution_inside_bounds would then fail.
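
Concretely, a sketch of the experimental variant (the method name and return value follow GaussianProcessRegressor; error handling and the other optimizer branches are elided):

import scipy.optimize

def _constrained_optimization(self, obj_func, initial_theta, bounds):
    # Experimental work-around only: the same L-BFGS-B call as above,
    # but without box constraints. This avoids the failure on the
    # affected platform, at the cost of letting the optimizer return
    # a theta outside the kernel bounds, which is exactly why
    # test_solution_inside_bounds then fails.
    if self.optimizer == "fmin_l_bfgs_b":
        opt_res = scipy.optimize.minimize(
            obj_func, initial_theta, method="L-BFGS-B", jac=True)
        return opt_res.x, opt_res.fun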

@thomasjpfan (Member)

When using scipy==0.19.1 and numpy==1.13.1 from PyPI on the same Docker image, the tests pass.

@AidarShakerimoff

take

@cmarmo (Contributor) commented Oct 5, 2021

@adrinjalali, as 3.6 is no longer supported, this one could probably be closed? Thanks.

@adrinjalali (Member, Author)

yep, thanks @cmarmo
