Release 1.0 cherry-picks #21090

Merged: 49 commits, Sep 23, 2021
Commits
e86b143
MAINT missing what's new entry for PR-19401 (#20955)
ogrisel Sep 7, 2021
d707ef4
DOC Ensures that Birch passes numpydoc validation (#20972)
baam25simo Sep 8, 2021
3c0f0e8
DOC Ensures that GammaRegressor passes numpydoc validation (#20973)
baam25simo Sep 8, 2021
5550789
DOC Reword score_sample's and similar interfaces' docstrings (#20979)
jjerphan Sep 8, 2021
9b0ecf2
DOC Ensures that GaussianProcessRegressor passes numpydoc validation …
baam25simo Sep 9, 2021
9d5526f
DOC Ensures that GaussianRandomProjection passes numpydoc validation …
baam25simo Sep 9, 2021
d2efdc1
DOC Ensures that SelectKBest passes numpydoc validation (#20983)
jmloyola Sep 9, 2021
2c9dab3
DOC Ensures that SelectFdr passes numpydoc validation (#20984)
jmloyola Sep 9, 2021
870410e
DOC Ensures that SelectFpr passes numpydoc validation (#20985)
jmloyola Sep 9, 2021
b086529
DOC Ensures that SelectFwe passes numpydoc validation (#20986)
jmloyola Sep 9, 2021
065bca8
DOC Ensures that SelectFromModel passes numpydoc validation (#20988)
jmloyola Sep 9, 2021
f755d71
DOC Ensures that LabelBinarizer passes numpydoc validation (#20990)
genvalen Sep 9, 2021
3d8583a
DOC Ensures that Pipeline passes numpydoc validation (#20969)
jmloyola Sep 9, 2021
497bcb6
DOC Fixes build from source instructions (#21004)
thomasjpfan Sep 10, 2021
768b7a2
CI Migrates pypy3 test to Azure (#21005)
thomasjpfan Sep 10, 2021
942b996
DOC fix typo parameter name in example Isomap (#21011)
jalexand3r Sep 13, 2021
73dec98
DOC Ensures that KBinsDiscretizer passes numpydoc validation (#21016)
genvalen Sep 13, 2021
8b9a7f9
DOC Ensures that OPTICS passes numpydoc validation (#21017)
jmloyola Sep 13, 2021
83ce10d
DOC Ensures that Isomap passes numpydoc validation (#21018)
jmloyola Sep 13, 2021
f3cd2a5
DOC Ensures that RadiusNeighborsTransformer passes numpydoc validatio…
jmloyola Sep 13, 2021
ef0cd4b
DOC - Ensures that GridSearchCV passes numpydoc validation (#21003)
EricEllwanger Sep 13, 2021
89e81e2
DOC Ensures that OneVsOneClassifier passes numpydoc validation (#21013)
bharatr21 Sep 13, 2021
c11f9c0
DOC Ensures that OneVsRestClassifier passes numpydoc validation (#21014)
bharatr21 Sep 13, 2021
269f3f1
DOC add contributors to whats_new 1.0 and more fixes (#21009)
adrinjalali Sep 13, 2021
5d8804d
DOC Ensures that PowerTransformer passes numpydoc validation (#21015)
jmloyola Sep 13, 2021
7bb3613
DOC Ensures that VarianceThreshold passes numpydoc validation (#21034)
genvalen Sep 14, 2021
37948b5
DOC Ensures that SequentialFeatureSelector passes numpydoc validation…
genvalen Sep 14, 2021
e36a99e
DOC Ensures that OrdinalEncoder passes numpydoc validation (#21030)
bharatr21 Sep 14, 2021
dfc80bb
DOC more whats new 1.0 fixes (#21036)
adrinjalali Sep 14, 2021
a052448
DOC - Ensures HalvingGridSearchCV and HalvingRandomSearchCV pass nump…
EricEllwanger Sep 14, 2021
bc285cd
DOC Ensures that MDS passes numpydoc validation (#21048)
jmloyola Sep 15, 2021
44cbbd2
DOC Ensure MeanShift docstrings passes numpydoc validation (#21049)
mani2106 Sep 15, 2021
c20ab57
DOC - Ensure HashingVectorizer passes numpydoc validation (#21047)
EricEllwanger Sep 15, 2021
9f57e4b
DOC Ensures that Normalizer passes numpydoc validation (#21061)
jmloyola Sep 16, 2021
c87f5cf
DOC Add a note for some data considerations with 20newsgroups dataset…
tonygeorge1984 Sep 16, 2021
f0c0e33
DOC Ensures that QuantileTransformer passes numpydoc validation (#21065)
mani2106 Sep 16, 2021
745bde1
DOC remove incorrect sentence about dependencies being automatically …
thomasjpfan Sep 16, 2021
f3f93ff
DOC Ensure HuberRegressor passes numpydoc validation (#21062)
EricEllwanger Sep 17, 2021
9b91d42
DOC Ensure that RANSACRegressor passes numpydoc validation (#21072)
EricEllwanger Sep 17, 2021
56b61cf
DOC Ensures that OutputCodeClassifier passes numpydoc validation (#21…
jmloyola Sep 17, 2021
1b800b0
DOC Typos found by codespell (#21069)
DimitriPapadopoulos Sep 17, 2021
49899b2
DOC Ensures that SimpleImputer passes numpydoc validation (#21077)
jmloyola Sep 18, 2021
479d891
DOC minor fixes to examples for neighbors transformers (#21057)
jalexand3r Sep 18, 2021
f54a46c
DOC Add m2cgen to related projects (#20646)
StrikerRUS Sep 18, 2021
9f0a671
DOC add release highlights for 1.0 (#20980)
adrinjalali Sep 19, 2021
5918ebb
API Change ColumnTransformer parameter name to verbose_feature_names_…
thomasjpfan Sep 21, 2021
8d5938e
DOC fix verbose_feature_names_out usage in release highlights (#21100)
adrinjalali Sep 21, 2021
bebb23f
DOC Fix a few typos in release highlights (#21096)
DimitriPapadopoulos Sep 22, 2021
887009a
DOC update contributors for 1.0 (#21111)
lorentzenchr Sep 22, 2021
31 changes: 0 additions & 31 deletions .circleci/config.yml
@@ -109,27 +109,6 @@ jobs:
name: linting
command: ./build_tools/circle/linting.sh

pypy3:
docker:
- image: condaforge/miniforge3
environment:
# Avoid the interactive dialog when installing tzdata
- DEBIAN_FRONTEND: noninteractive
steps:
- restore_cache:
keys:
- pypy3-ccache-{{ .Branch }}
- pypy3-ccache
- run: apt-get -yq update && apt-get -yq install git ssh
- checkout
- run: conda init bash && source ~/.bashrc
- run: ./build_tools/circle/build_test_pypy.sh
- save_cache:
key: pypy3-ccache-{{ .Branch }}-{{ .BuildNum }}
paths:
- ~/.ccache
- ~/.cache/pip

linux-arm64:
machine:
image: ubuntu-2004:202101-01
@@ -190,16 +169,6 @@ workflows:
- deploy:
requires:
- doc
pypy:
triggers:
- schedule:
cron: "0 0 * * *"
filters:
branches:
only:
- main
jobs:
- pypy3
linux-arm64:
jobs:
- linux-arm64
10 changes: 8 additions & 2 deletions azure-pipelines.yml
@@ -182,9 +182,9 @@ jobs:
TEST_DOCSTRINGS: 'true'
CHECK_WARNINGS: 'true'

- template: build_tools/azure/posix-32.yml
- template: build_tools/azure/posix-docker.yml
parameters:
name: Linux32
name: Linux_Docker
vmImage: ubuntu-20.04
dependsOn: [linting, git_commit]
condition: |
@@ -194,8 +194,14 @@
ne(variables['Build.Reason'], 'Schedule')
)
matrix:
pypy3:
DISTRIB: 'conda-mamba-pypy3'
DOCKER_CONTAINER: 'condaforge/mambaforge-pypy3:4.10.3-5'
PILLOW_VERSION: 'none'
PANDAS_VERSION: 'none'
debian_atlas_32bit:
DISTRIB: 'debian-32'
DOCKER_CONTAINER: 'i386/debian:10.9'
JOBLIB_VERSION: 'min'
# disable pytest xdist due to unknown bug with 32-bit container
PYTEST_XDIST_VERSION: 'none'
2 changes: 1 addition & 1 deletion benchmarks/bench_mnist.py
@@ -6,7 +6,7 @@
Benchmark on the MNIST dataset. The dataset comprises 70,000 samples
and 784 features. Here, we consider the task of predicting
10 classes - digits from 0 to 9 from their raw images. By contrast to the
covertype dataset, the feature space is homogenous.
covertype dataset, the feature space is homogeneous.

Example of output :
[..]
6 changes: 3 additions & 3 deletions benchmarks/bench_random_projections.py
@@ -43,10 +43,10 @@ def compute_time(t_start, delta):
return delta.seconds + delta.microseconds / mu_second


def bench_scikit_transformer(X, transfomer):
def bench_scikit_transformer(X, transformer):
gc.collect()

clf = clone(transfomer)
clf = clone(transformer)

# start time
t_start = datetime.now()
@@ -195,7 +195,7 @@ def print_row(clf_type, time_fit, time_transform):
###########################################################################
n_nonzeros = int(opts.ratio_nonzeros * opts.n_features)

print("Dataset statics")
print("Dataset statistics")
print("===========================")
print("n_samples \t= %s" % opts.n_samples)
print("n_features \t= %s" % opts.n_features)
22 changes: 19 additions & 3 deletions build_tools/azure/install.sh
@@ -5,9 +5,19 @@ set -x

UNAMESTR=`uname`

if [[ "$DISTRIB" == "conda-mamba-pypy3" ]]; then
# condaforge/mambaforge-pypy3 needs compilers
apt-get -yq update
apt-get -yq install build-essential
fi

make_conda() {
TO_INSTALL="$@"
conda create -n $VIRTUALENV --yes $TO_INSTALL
if [[ "$DISTRIB" == *"mamba"* ]]; then
mamba create -n $VIRTUALENV --yes $TO_INSTALL
else
conda create -n $VIRTUALENV --yes $TO_INSTALL
fi
source activate $VIRTUALENV
}

@@ -25,15 +35,21 @@ setup_ccache() {
# imports get_dep
source build_tools/shared.sh

if [[ "$DISTRIB" == "conda" ]]; then
if [[ "$DISTRIB" == "conda" || "$DISTRIB" == *"mamba"* ]]; then

if [[ "$CONDA_CHANNEL" != "" ]]; then
TO_INSTALL="-c $CONDA_CHANNEL"
else
TO_INSTALL=""
fi

TO_INSTALL="$TO_INSTALL python=$PYTHON_VERSION ccache pip blas[build=$BLAS]"
if [[ "$DISTRIB" == *"pypy"* ]]; then
TO_INSTALL="$TO_INSTALL pypy"
else
TO_INSTALL="$TO_INSTALL python=$PYTHON_VERSION"
fi

TO_INSTALL="$TO_INSTALL ccache pip blas[build=$BLAS]"

TO_INSTALL="$TO_INSTALL $(get_dep numpy $NUMPY_VERSION)"
TO_INSTALL="$TO_INSTALL $(get_dep scipy $SCIPY_VERSION)"
build_tools/azure/posix-docker.yml
@@ -30,12 +30,16 @@ jobs:
THREADPOOLCTL_VERSION: 'latest'
COVERAGE: 'false'
TEST_DOCSTRINGS: 'false'
BLAS: 'openblas'
# Set in azure-pipelines.yml
DISTRIB: ''
DOCKER_CONTAINER: ''
strategy:
matrix:
${{ insert }}: ${{ parameters.matrix }}

steps:
# Container is detached and sleeping, allowing steps to run commmands
# Container is detached and sleeping, allowing steps to run commands
# in the container. The TEST_DIR is mapped allowing the host to access
# the JUNITXML file
- script: >
@@ -45,7 +49,7 @@
-w /io
--detach
--name skcontainer
-e DISTRIB=debian-32
-e DISTRIB=$DISTRIB
-e TEST_DIR=/temp_dir
-e JUNITXML=$JUNITXML
-e VIRTUALENV=testvenv
@@ -63,7 +67,8 @@
-e OMP_NUM_THREADS=$OMP_NUM_THREADS
-e OPENBLAS_NUM_THREADS=$OPENBLAS_NUM_THREADS
-e SKLEARN_SKIP_NETWORK_TESTS=$SKLEARN_SKIP_NETWORK_TESTS
i386/debian:10.9
-e BLAS=$BLAS
$DOCKER_CONTAINER
sleep 1000000
displayName: 'Start container'
- script: >
2 changes: 1 addition & 1 deletion build_tools/circle/list_versions.py
@@ -34,7 +34,7 @@ def human_readable_data_quantity(quantity, multiple=1024):

def get_file_extension(version):
if "dev" in version:
# The 'dev' branch should be explictly handled
# The 'dev' branch should be explicitly handled
return "zip"

current_version = LooseVersion(version)
2 changes: 1 addition & 1 deletion build_tools/shared.sh
@@ -5,7 +5,7 @@ get_dep() {
# do not install with none
echo
elif [[ "${version%%[^0-9.]*}" ]]; then
# version number is explicity passed
# version number is explicitly passed
echo "$package==$version"
elif [[ "$version" == "latest" ]]; then
# use latest
2 changes: 1 addition & 1 deletion doc/common_pitfalls.rst
@@ -560,7 +560,7 @@ bad performance. Similarly, we want a random forest to be robust w.r.t the
set of randomly selected features that each tree will be using.

For these reasons, it is preferable to evaluate the cross-validation
preformance by letting the estimator use a different RNG on each fold. This
performance by letting the estimator use a different RNG on each fold. This
is done by passing a `RandomState` instance (or `None`) to the estimator
initialization.

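The paragraph fixed above is easier to act on with a concrete snippet. A minimal sketch of the recommendation, assuming an illustrative dataset and estimator (not taken from the docs):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)

# Passing a RandomState instance (rather than a fixed integer seed) lets
# the estimator draw different random numbers on each CV fold, so the
# cross-validation performance also reflects the estimator's own randomness.
rng = np.random.RandomState(0)
clf = RandomForestClassifier(random_state=rng)
print(cross_val_score(clf, X, y, cv=5).mean())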
2 changes: 1 addition & 1 deletion doc/conf.py
@@ -240,7 +240,7 @@
"release_highlights"
] = f"auto_examples/release_highlights/{latest_highlights}"

# get version from higlight name assuming highlights have the form
# get version from highlight name assuming highlights have the form
# plot_release_highlights_0_22_0
highlight_version = ".".join(latest_highlights.split("_")[-3:-1])
html_context["release_highlights_version"] = highlight_version
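The fixed comment refers to the name-parsing step just below it in doc/conf.py; run standalone, the same logic looks like this (the file name is an illustrative value copied from the comment):

# Recover "0.22" from a highlights example named plot_release_highlights_0_22_0.
latest_highlights = "plot_release_highlights_0_22_0"
highlight_version = ".".join(latest_highlights.split("_")[-3:-1])
print(highlight_version)  # 0.22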
12 changes: 6 additions & 6 deletions doc/developers/advanced_installation.rst
@@ -60,11 +60,12 @@ feature, code or documentation improvement).
#. Optional (but recommended): create and activate a dedicated virtualenv_
or `conda environment`_.

#. Install Cython_ and build the project with pip in :ref:`editable_mode`:
#. Install NumPy_, SciPy_, and Cython_ and build the project with pip in
:ref:`editable_mode`:

.. prompt:: bash $

pip install cython
pip install numpy scipy cython
pip install --verbose --no-build-isolation --editable .

#. Check that the installed scikit-learn has a version number ending with
@@ -100,9 +101,6 @@ runtime:
- Joblib (>= |JoblibMinVersion|),
- threadpoolctl (>= |ThreadpoolctlMinVersion|).

Those dependencies are **automatically installed by pip** if they were missing
when building scikit-learn from source.

.. note::

For running on PyPy, PyPy3-v5.10+, Numpy 1.14.0+, and scipy 1.1.0+
@@ -376,7 +374,7 @@ isolation from the Python packages installed via the system packager. When
using an isolated environment, ``pip3`` should be replaced by ``pip`` in the
above commands.

When precompiled wheels of the runtime dependencies are not avalaible for your
When precompiled wheels of the runtime dependencies are not available for your
architecture (e.g. ARM), you can install the system versions:

.. prompt:: bash $
@@ -436,6 +434,8 @@ the base system and these steps will not be necessary.

.. _OpenMP: https://en.wikipedia.org/wiki/OpenMP
.. _Cython: https://cython.org
.. _NumPy: https://numpy.org
.. _SciPy: https://www.scipy.org
.. _Homebrew: https://brew.sh
.. _virtualenv: https://docs.python.org/3/tutorial/venv.html
.. _conda environment: https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html
2 changes: 1 addition & 1 deletion doc/developers/contributing.rst
@@ -1004,7 +1004,7 @@ installed in your current Python environment:

asv run --python=same

It's particulary useful when you installed scikit-learn in editable mode to
It's particularly useful when you installed scikit-learn in editable mode to
avoid creating a new environment each time you run the benchmarks. By default
the results are not saved when using an existing installation. To save the
results you must specify a commit hash:
4 changes: 2 additions & 2 deletions doc/developers/maintainer.rst
@@ -33,7 +33,7 @@ Before a release

- ``maint_tools/sort_whats_new.py`` can put what's new entries into
sections. It's not perfect, and requires manual checking of the changes.
If the whats new list is well curated, it may not be necessary.
If the what's new list is well curated, it may not be necessary.

- The ``maint_tools/whats_missing.sh`` script may be used to identify pull
requests that were merged but likely missing from What's New.
@@ -198,7 +198,7 @@ Making a release
`Continuous Integration
<https://en.wikipedia.org/wiki/Continuous_integration>`_. The CD workflow on
GitHub Actions is also used to automatically create nightly builds and
publish packages for the developement branch of scikit-learn. See
publish packages for the development branch of scikit-learn. See
:ref:`install_nightly_builds`.

4. Once all the CD jobs have completed successfully in the PR, merge it,
4 changes: 2 additions & 2 deletions doc/install.rst
@@ -158,7 +158,7 @@ Installing on Apple Silicon M1 hardware

The recently introduced `macos/arm64` platform (sometimes also known as
`macos/aarch64`) requires the open source community to upgrade the build
configuation and automation to properly support it.
configuration and automation to properly support it.

At the time of writing (January 2021), the only way to get a working
installation of scikit-learn on this hardware is to install scikit-learn and its
@@ -204,7 +204,7 @@ It can be installed by typing the following command:
Debian/Ubuntu
-------------

The Debian/Ubuntu package is splitted in three different packages called
The Debian/Ubuntu package is split in three different packages called
``python3-sklearn`` (python modules), ``python3-sklearn-lib`` (low-level
implementations and bindings), ``python3-sklearn-doc`` (documentation).
Only the Python 3 version is available in the Debian Buster (the more recent
10 changes: 5 additions & 5 deletions doc/modules/compose.rst
@@ -449,13 +449,13 @@ By default, the remaining rating columns are ignored (``remainder='drop'``)::
>>> column_trans = ColumnTransformer(
... [('categories', OneHotEncoder(dtype='int'), ['city']),
... ('title_bow', CountVectorizer(), 'title')],
... remainder='drop', prefix_feature_names_out=False)
... remainder='drop', verbose_feature_names_out=False)

>>> column_trans.fit(X)
ColumnTransformer(prefix_feature_names_out=False,
transformers=[('categories', OneHotEncoder(dtype='int'),
ColumnTransformer(transformers=[('categories', OneHotEncoder(dtype='int'),
['city']),
('title_bow', CountVectorizer(), 'title')])
('title_bow', CountVectorizer(), 'title')],
verbose_feature_names_out=False)

>>> column_trans.get_feature_names_out()
array(['city_London', 'city_Paris', 'city_Sallisaw', 'bow', 'feast',
@@ -573,7 +573,7 @@ many estimators. This visualization is activated by setting the

>>> from sklearn import set_config
>>> set_config(display='diagram') # doctest: +SKIP
>>> # diplays HTML representation in a jupyter context
>>> # displays HTML representation in a jupyter context
>>> column_trans # doctest: +SKIP

An example of the HTML output can be seen in the
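Since this hunk renames the parameter, a short sketch of the corrected keyword in use; the toy frame is illustrative rather than the documentation's dataset:

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

X = pd.DataFrame({"city": ["London", "Paris"], "title": ["a", "b"]})

# verbose_feature_names_out=False drops the "<transformer>__" prefix
# from the names returned by get_feature_names_out.
column_trans = ColumnTransformer(
    [("categories", OneHotEncoder(dtype="int"), ["city"])],
    remainder="drop",
    verbose_feature_names_out=False,
)
column_trans.fit(X)
print(column_trans.get_feature_names_out())  # ['city_London' 'city_Paris']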
2 changes: 1 addition & 1 deletion doc/modules/cross_decomposition.rst
@@ -64,7 +64,7 @@ Set :math:`X_1` to :math:`X` and :math:`Y_1` to :math:`Y`. Then, for each
:math:`C = X_k^T Y_k`.
:math:`u_k` and :math:`v_k` are called the *weights*.
By definition, :math:`u_k` and :math:`v_k` are
choosen so that they maximize the covariance between the projected
chosen so that they maximize the covariance between the projected
:math:`X_k` and the projected target, that is :math:`\text{Cov}(X_k u_k,
Y_k v_k)`.
- b) Project :math:`X_k` and :math:`Y_k` on the singular vectors to obtain
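As a rough illustration of where the weights u_k and v_k from this passage surface in the API (random data, chosen only so the snippet runs):

import numpy as np
from sklearn.cross_decomposition import PLSCanonical

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 5))
Y = rng.normal(size=(100, 3))

# After fitting, the weight vectors described in this section are exposed
# as the columns of x_weights_ and y_weights_.
pls = PLSCanonical(n_components=2).fit(X, Y)
print(pls.x_weights_.shape, pls.y_weights_.shape)  # (5, 2) (3, 2)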
2 changes: 1 addition & 1 deletion doc/modules/cross_validation.rst
@@ -974,7 +974,7 @@ test is therefore only able to show when the model reliably outperforms
random guessing.

Finally, :func:`~sklearn.model_selection.permutation_test_score` is computed
using brute force and interally fits ``(n_permutations + 1) * n_cv`` models.
using brute force and internally fits ``(n_permutations + 1) * n_cv`` models.
It is therefore only tractable with small datasets for which fitting an
individual model is very fast.

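A small sketch of the cost spelled out in the corrected sentence; with n_permutations=100 and 5-fold CV the call below fits (100 + 1) * 5 = 505 models (estimator and data are illustrative):

from sklearn.datasets import load_iris
from sklearn.model_selection import permutation_test_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Fits (n_permutations + 1) * n_cv models internally.
score, perm_scores, pvalue = permutation_test_score(
    SVC(kernel="linear"), X, y, cv=5, n_permutations=100
)
print(score, pvalue)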
2 changes: 1 addition & 1 deletion doc/modules/decomposition.rst
@@ -829,7 +829,7 @@ and the intensity of the regularization with the :attr:`alpha_W` and :attr:`alph
(:math:`\alpha_W` and :math:`\alpha_H`) parameters. The priors are scaled by the number
of samples (:math:`n\_samples`) for `H` and the number of features (:math:`n\_features`)
for `W` to keep their impact balanced with respect to one another and to the data fit
term as independant as possible of the size of the training set. Then the priors terms
term as independent as possible of the size of the training set. Then the priors terms
are:

.. math::
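A minimal sketch of the parameters this passage describes, with arbitrary values (not taken from the documentation):

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.RandomState(0)
X = np.abs(rng.normal(size=(20, 10)))

# alpha_W sets the regularization on W (scaled by n_features);
# alpha_H='same' reuses that strength for H (scaled by n_samples).
nmf = NMF(n_components=4, alpha_W=0.1, alpha_H="same", l1_ratio=0.5,
          init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)
H = nmf.components_
print(W.shape, H.shape)  # (20, 4) (4, 10)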
4 changes: 2 additions & 2 deletions doc/modules/lda_qda.rst
@@ -187,7 +187,7 @@ an estimate for the covariance matrix). Setting this parameter to a value
between these two extrema will estimate a shrunk version of the covariance
matrix.

The shrinked Ledoit and Wolf estimator of covariance may not always be the
The shrunk Ledoit and Wolf estimator of covariance may not always be the
best choice. For example if the distribution of the data
is normally distributed, the
Oracle Shrinkage Approximating estimator :class:`sklearn.covariance.OAS`
@@ -234,7 +234,7 @@ For QDA, the use of the SVD solver relies on the fact that the covariance
matrix :math:`\Sigma_k` is, by definition, equal to :math:`\frac{1}{n - 1}
X_k^tX_k = \frac{1}{n - 1} V S^2 V^t` where :math:`V` comes from the SVD of the (centered)
matrix: :math:`X_k = U S V^t`. It turns out that we can compute the
log-posterior above without having to explictly compute :math:`\Sigma`:
log-posterior above without having to explicitly compute :math:`\Sigma`:
computing :math:`S` and :math:`V` via the SVD of :math:`X` is enough. For
LDA, two SVDs are computed: the SVD of the centered input matrix :math:`X`
and the SVD of the class-wise mean vectors.
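A hedged sketch contrasting the two choices discussed above, on synthetic data; OAS is plugged in through the covariance_estimator parameter (supported by the 'lsqr' and 'eigen' solvers):

import numpy as np
from sklearn.covariance import OAS
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.RandomState(0)
X = rng.normal(size=(60, 20))
y = rng.randint(0, 2, size=60)

# Ledoit-Wolf shrinkage via the built-in shrinkage='auto' ...
lda_lw = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
# ... versus the Oracle Shrinkage Approximating (OAS) estimator.
lda_oas = LinearDiscriminantAnalysis(solver="lsqr",
                                     covariance_estimator=OAS()).fit(X, y)
print(lda_lw.score(X, y), lda_oas.score(X, y))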
2 changes: 1 addition & 1 deletion doc/modules/model_evaluation.rst
@@ -2381,7 +2381,7 @@ of 0.0.
A scorer object with a specific choice of ``power`` can be built by::

>>> from sklearn.metrics import d2_tweedie_score, make_scorer
>>> d2_tweedie_score_15 = make_scorer(d2_tweedie_score, pwoer=1.5)
>>> d2_tweedie_score_15 = make_scorer(d2_tweedie_score, power=1.5)

.. _pinball_loss:

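To see the corrected keyword in context, a usage sketch of the scorer built above; the regressor and data are placeholders (Tweedie targets must be positive for power=1.5):

import numpy as np
from sklearn.linear_model import TweedieRegressor
from sklearn.metrics import d2_tweedie_score, make_scorer
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 3))
y = rng.gamma(shape=2.0, scale=1.0, size=100)  # strictly positive targets

# `power` (not `pwoer`) matches the Tweedie power parameter of the metric.
d2_tweedie_score_15 = make_scorer(d2_tweedie_score, power=1.5)
reg = TweedieRegressor(power=1.5, alpha=0.1)
print(cross_val_score(reg, X, y, scoring=d2_tweedie_score_15).mean())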