Get the number of iterations in SVR #18928
I don't know why we don't expose `n_iter_` for the libsvm-based estimators. Regarding `GridSearchCV`: estimator attributes aren't stored in `cv_results_`. You can however access the attributes of the final fitted model from `best_estimator_`.
For reference: `scikit-learn/sklearn/utils/estimator_checks.py`, lines 2751 to 2762 at f82525e.
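For illustration, a minimal sketch of the `best_estimator_` route mentioned above; it assumes a scikit-learn version where libsvm-based estimators expose `n_iter_`, which 0.23 does not:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=5, random_state=0)

search = GridSearchCV(SVR(), {"C": [0.1, 1, 10]})
search.fit(X, y)

# cv_results_ only holds scores and timings; estimator attributes are not
# stored there. The model refit on the full data is available instead.
print(search.best_estimator_.n_iter_)  # assumes SVR exposes n_iter_
```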
Actually, we can, by passing the `return_estimator=True` parameter to `cross_validate`.
Yup, but that's for `cross_validate`. @Leonardbcm mentioned `GridSearchCV` and `cv_results_` in particular. `cross_validate` might fit @Leonardbcm's needs though.
Whoops, I misread, my bad.
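For completeness, a sketch of the `cross_validate` route with `return_estimator=True` (again assuming the fitted `SVR`s expose `n_iter_`, which is not the case in 0.23):

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_validate
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=5, random_state=0)

# return_estimator=True returns the fitted estimator of each fold
# under the "estimator" key of the result dict.
cv_out = cross_validate(SVR(C=1.0), X, y, cv=5, return_estimator=True)
print([est.n_iter_ for est in cv_out["estimator"]])
```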
Hacky solution: add a scorer under the grid searcher's `scoring` parameter, defined like:

```python
def get_n_iter(est, X_test, y_test):
    return est.n_iter_
```
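A sketch of wiring that scorer into a grid search through multi-metric scoring, so the counts end up in `cv_results_` (still assuming the fitted `SVR` exposes `n_iter_`):

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

def get_n_iter(est, X_test, y_test):
    # Report the fitted estimator's iteration count as if it were a score.
    return est.n_iter_

X, y = make_regression(n_samples=200, n_features=5, random_state=0)

scoring = {"neg_mse": "neg_mean_squared_error", "n_iter": get_n_iter}
search = GridSearchCV(SVR(), {"C": [0.1, 1, 10]},
                      scoring=scoring, refit="neg_mse")
search.fit(X, y)

# The per-candidate counts land next to the real metric in cv_results_,
# e.g. under "mean_test_n_iter" and "split0_test_n_iter".
print(search.cv_results_["mean_test_n_iter"])
```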
Hi, thanks for your answers.
This is sad but it makes a lot of sense.
It also makes sense, but it could be optional. Metric values are already stored, and they also take up memory.
Unfortunately, no. My use-case is the following: I run my grid search on 20 CPUs. Most of the grid-search candidates take about 10 s per fold to train and evaluate, while a few others take hours to converge. As a single model training can't be dispatched on multiple cores, the result is that I have 18 CPUs that are done and 2 that block the main script. (Does that make sense?) What I am trying to do is set the `max_iter` parameter to cap those long-running fits.
The problem is that I have no idea what order of magnitude the iteration counts are, so I can't pick a reasonable value. Thanks for your help.
If it is OK, I would like to work on this. Should we try to expose more attributes from the libsvm code base?
Exposing `n_iter_` sounds reasonable.
Describe the workflow you want to enable
Hi everyone,
I am manipulating `SVR` objects in `GridSearchCV`. I am able to access the `mean_fit_time` in the `cv_results_`, but I can't access the number of iterations of the optimization problem. I would like to have this information to properly set the `max_iter` parameter of the `GridSearchCV`.
Describe your proposed solution
I have tried fitting an `SVR` with `verbose=True` and looking at libsvm's output; I am interested in getting the `#iter` field reported there. It should be available as a property of the model once fitted, and the numbers of iterations should appear somewhere in the `cv_results_`.
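A minimal sketch of the kind of fit being referred to, assuming `verbose=True` is used so that libsvm prints its optimization log, including the `#iter` line:

```python
from sklearn.datasets import make_regression
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=5, random_state=0)

# With verbose=True, libsvm prints something like
#     optimization finished, #iter = 152
# to stdout during fit, but (as of 0.23) the count is not stored
# anywhere on the fitted estimator.
model = SVR(verbose=True)
model.fit(X, y)
```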
Also, please note that this feature should be available for all libsvm-based SVM objects: `SVC`, `SVR`, etc.
Additional context
I am running this code on:
Python 3.7.3
scikit-learn 0.23.1
Thanks in advance for your support.