[MRG] Adds documentation for parallelisation of custom scorer #12813


Merged · 12 commits · Jan 15, 2019
19 changes: 19 additions & 0 deletions doc/modules/model_evaluation.rst
@@ -215,6 +215,25 @@ the following two rules:
Again, by convention higher numbers are better, so if your scorer
returns loss, that value should be negated.

.. note:: **Using custom scorers in functions where n_jobs > 1**

While defining the custom scoring function alongside the calling function
should work out of the box with the default joblib backend (loky),
importing it from another module is a more robust approach that works
independently of the joblib backend.

For example, to use ``n_jobs`` greater than 1 in the example below,
``custom_scoring_function`` is saved in a user-created module
(``custom_scorer_module.py``) and imported::

>>> from custom_scorer_module import custom_scoring_function # doctest: +SKIP
>>> cross_val_score(model,
... X_train,
... y_train,
... scoring=make_scorer(custom_scoring_function, greater_is_better=False),
... cv=5,
... n_jobs=-1) # doctest: +SKIP
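
For instance, ``custom_scorer_module.py`` could contain something along
these lines (a minimal sketch; the exact body of
``custom_scoring_function`` is an assumption for illustration)::

    # custom_scorer_module.py
    import numpy as np

    def custom_scoring_function(y_true, y_pred):
        # A loss (lower is better), hence greater_is_better=False
        # when it is wrapped with make_scorer above.
        return np.mean(np.abs(y_true - y_pred))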

.. _multimetric_scoring:

Using multiple metric evaluation