
[MRG] Update roadmap #13809


Merged 1 commit on May 9, 2019
19 changes: 0 additions & 19 deletions doc/roadmap.rst
@@ -128,7 +128,6 @@ bottom.

#. Improved tools for model diagnostics and basic inference

* partial dependence plots :issue:`5653`
* alternative feature importances implementations (e.g. methods or wrappers)
* better ways to handle validation sets when fitting
* better ways to find thresholds / create decision rules :issue:`8614`
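The "model diagnostics" items above include partial dependence plots. The underlying computation is simple enough to sketch in plain Python (a hypothetical `partial_dependence` helper for illustration, not the scikit-learn function): clamp one feature to each value on a grid and average the model's predictions over the dataset.

```python
def partial_dependence(predict, X, feature, grid):
    """Average model predictions with `feature` clamped to each grid value."""
    averages = []
    for value in grid:
        # Replace column `feature` by `value` in every row, keep the rest.
        modified = [row[:feature] + [value] + row[feature + 1:] for row in X]
        averages.append(sum(predict(r) for r in modified) / len(X))
    return averages

# Toy linear model whose prediction depends on both features.
predict = lambda row: 2 * row[0] + row[1]
X = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]
print(partial_dependence(predict, X, feature=0, grid=[0.0, 1.0, 2.0]))
# -> [20.0, 22.0, 24.0]: the curve isolates the 2 * x0 effect.
```

Plotting these averages against the grid gives the partial dependence plot for that feature.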
@@ -144,19 +143,6 @@ bottom.
:issue:`6929`
* Callbacks or a similar system would facilitate logging and early stopping
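The callback idea can be sketched in plain Python (a hypothetical `EarlyStopping` helper, not a scikit-learn API): the fit loop invokes each callback after every iteration and stops as soon as one of them says the monitored loss has stalled.

```python
class EarlyStopping:
    """Signal stop when the loss has not improved for `patience` iterations."""
    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("inf")
        self.bad_rounds = 0

    def __call__(self, iteration, loss):
        if loss < self.best:
            self.best = loss
            self.bad_rounds = 0
        else:
            self.bad_rounds += 1
        return self.bad_rounds >= self.patience  # True means "stop now"

def fit(losses, callbacks=()):
    """Toy fit loop over a precomputed loss sequence, honouring callbacks."""
    for i, loss in enumerate(losses):
        if any(cb(i, loss) for cb in callbacks):
            return i  # iteration at which training stopped
    return len(losses) - 1

stopped_at = fit([1.0, 0.8, 0.7, 0.71, 0.72, 0.73, 0.9],
                 callbacks=[EarlyStopping(patience=3)])
print(stopped_at)  # 5: three non-improving rounds after the 0.7 minimum
```

The same hook could just as well log metrics or checkpoint the model, which is why a generic callback system covers both use cases in the bullet above.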

#. Use scipy BLAS Cython bindings

* This will make it possible to get rid of our partial copy of suboptimal
Atlas C-routines. :issue:`11638`
* This should speed up the Windows and Linux wheels

#. Allow fine-grained parallelism in cython

* Now that we do not use fork-based multiprocessing in joblib anymore it's
possible to use the prange / openmp thread management which makes it
possible to have very efficient thread-based parallelism at the Cython
level. Example with K-Means: :issue:`11950`
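`prange` itself is a Cython construct, but the shape of the pattern can be illustrated with a Python thread pool (the real speed-up only appears in Cython, where the loop body runs with the GIL released):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # In Cython this body would be a `nogil` loop iterated with `prange`.
    return sum(x * x for x in chunk)

data = list(range(1000))
chunks = [data[i::4] for i in range(4)]  # interleave the work across 4 threads

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)  # identical to the serial sum of squares
```

Thread-based parallelism like this shares memory between workers, which is exactly what fork-based multiprocessing made unsafe before the joblib change mentioned above.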

#. Distributed parallelism

* Joblib can now plug onto several backends, some of them can distribute the
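Backend selection in joblib is done with the `joblib.parallel_backend` context manager; a minimal sketch (the bundled `"threading"` backend is used here, while distributed backends such as dask register themselves and are selected the same way):

```python
from joblib import Parallel, delayed, parallel_backend

# Any joblib-powered code inside the `with` block, including scikit-learn
# estimators with n_jobs set, is routed to the chosen backend.
with parallel_backend("threading", n_jobs=2):
    squares = Parallel()(delayed(lambda x: x * x)(i) for i in range(5))

print(squares)  # [0, 1, 4, 9, 16]
```

Because the backend is picked at the call site rather than inside the estimator, scikit-learn code needs no changes to run on a distributed backend.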
@@ -240,9 +226,6 @@ Subpackage-specific goals
:mod:`sklearn.ensemble`

* a stacking implementation
* a binned feature histogram based and thread parallel implementation of
decision trees to compete with the performance of state of the art gradient
boosting like LightGBM.
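The key trick behind such histogram-based trees is pre-binning each feature into a small number of buckets (LightGBM uses at most 255), so split finding scans bins instead of sorted samples. A minimal quantile-binning sketch in plain Python (hypothetical helpers, not scikit-learn code):

```python
from bisect import bisect_right

def quantile_bin_edges(values, n_bins):
    """Bin edges placed at evenly spaced quantiles of the observed values."""
    ordered = sorted(values)
    step = len(ordered) / n_bins
    return [ordered[int(i * step)] for i in range(1, n_bins)]

def to_bins(values, edges):
    """Replace each value by a small integer bin index; split finding then
    scans at most len(edges) + 1 histogram buckets instead of all samples."""
    return [bisect_right(edges, v) for v in values]

values = [0.1, 2.5, 0.3, 9.9, 4.2, 4.3, 7.7, 0.2]
edges = quantile_bin_edges(values, n_bins=4)
binned = to_bins(values, edges)
print(binned)  # [0, 1, 1, 3, 2, 2, 3, 0]
```

Binning is done once before growing any tree, so the per-split cost no longer depends on sorting the raw feature values.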

:mod:`sklearn.model_selection`

@@ -269,5 +252,3 @@ Subpackage-specific goals

* Performance issues with `Pipeline.memory`
* see "Everything in Scikit-learn should conform to our API contract" above
* Add a verbose option :issue:`10435`
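A verbose option of the kind :issue:`10435` asks for would print each pipeline step as it finishes, with timing. A standalone sketch of the idea (a hypothetical `TimedPipeline`, not the scikit-learn `Pipeline` class):

```python
import time

class TimedPipeline:
    """Run named (name, func) steps in order, optionally printing progress."""
    def __init__(self, steps, verbose=False):
        self.steps = steps
        self.verbose = verbose

    def run(self, data):
        for i, (name, func) in enumerate(self.steps, 1):
            start = time.perf_counter()
            data = func(data)
            if self.verbose:
                elapsed = time.perf_counter() - start
                print(f"[step {i}/{len(self.steps)}] {name} done in {elapsed:.3f}s")
        return data

pipe = TimedPipeline(
    [("double", lambda xs: [2 * x for x in xs]),
     ("total", sum)],
    verbose=True,
)
result = pipe.run([1, 2, 3])
print(result)  # 12, with one progress line per step printed above it
```

Keeping the flag off by default preserves the current silent behaviour while making long-running pipelines observable when needed.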