MRG - ENH change lasso_path output to lars_path style #1947
Conversation
Will also have to fix conflicts and rebase 😬
" and alphas instead of just a list of models as previousely" | ||
" lasso_path did. `old_return` will eventually be removed in" | ||
" 0.15, after which, returning alphas and coefs will become" | ||
" the norm.") |
Can you use a more explicit name than old_return? Something like return_models.
Hey Alex, thanks for the review. I'd done most of these already, but I can't seem to push commits right now (it's been impossible all afternoon). Hopefully I'll get your suggestions up shortly.
return_models sounds good to me. I'll make it so.
This warning is also going to be replaced with a conditional (if-guarded) warning, since as it stands it's annoying and appears regardless of your choice of 'old_return'. Thanks again :)
Also, all the calls to lasso_path in the docs and functions should now use old_return=False, or the better name suggested.
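For context, a minimal sketch of the array-style calling convention being introduced here. The data and names below are illustrative, not from the PR; note that in released scikit-learn the deprecation has long since completed, so lasso_path returns a third value (the dual gaps) and the return_models flag no longer exists:

```python
import numpy as np
from sklearn.linear_model import lasso_path

# Toy regression data; shapes are illustrative only.
rng = np.random.RandomState(0)
X = rng.randn(50, 8)
y = X @ rng.randn(8) + 0.1 * rng.randn(50)

# Array-style output instead of a list of fitted Lasso models.
# Modern scikit-learn returns (alphas, coefs, dual_gaps).
alphas, coefs, _ = lasso_path(X, y, eps=1e-3)

print(alphas.shape)  # (100,) by default: one entry per point on the path
print(coefs.shape)   # (n_features, n_alphas) == (8, 100)
```

The arrays make it cheap to plot or post-process the whole regularization path without holding one estimator object per alpha.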
Thanks @agramfort! I'll push your suggested changes once my connection stops its #$@% :) haha
Finally 😠 haha. @agramfort, do you think it's necessary to change the benchmarks too? Also, I think I've got all the docstrings etc. covered.
Rebased on master
Changed from WIP to MRG - waiting on Travis :)
# Display results

pl.figure(1)
ax = pl.gca()
ax.set_color_cycle(2 * ['b', 'r', 'g', 'c', 'k'])
l1 = pl.plot(-np.log10(alphas_lasso), coefs_lasso)
l2 = pl.plot(-np.log10(alphas_enet), coefs_enet, linestyle='--')
l1 = pl.plot(coefs_lasso.T)
Is there a reason why the -log10(alphas) disappeared, or is this a merge artifact? (The remark applies to the whole file.)
I think it came from the merge; I don't recall any reason for removing it.
I'll switch it back. Thanks for pointing it out 👍
I just found another merge-related bug caused by this one.. fixing.
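For reference, the convention being restored plots coefficients against -log10(alpha), so the strongly regularized (sparse) end of the path sits on the left. A small sketch of just the x-axis transform (the alphas here are made up):

```python
import numpy as np

# Assumed: alphas as returned by the path functions, decreasing from
# strong to weak regularization.
alphas = np.logspace(0, -3, 5)   # 1.0 down to 0.001
xs = -np.log10(alphas)          # x-axis used in the example plot

# xs increases as regularization weakens, so reading the plot left to
# right shows coefficients entering the model as alpha shrinks.
print(xs)
```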
Ready
@@ -744,7 +795,7 @@ def enet_path(X, y, l1_ratio=0.5, eps=1e-3, n_alphas=100, alphas=None,
    if precompute == 'auto':
        precompute = (n_samples > n_features)

    if precompute:
    if precompute or ((precompute == 'auto') and (n_samples > n_features)):
Is this necessary? You already have above:

if precompute == 'auto':
    precompute = (n_samples > n_features)
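To make the redundancy concrete, here is a hypothetical standalone restatement of the logic being reviewed (resolve_precompute is an invented name, not a scikit-learn function):

```python
def resolve_precompute(precompute, n_samples, n_features):
    """Hypothetical restatement of the 'auto' resolution being reviewed."""
    # 'auto' is normalized to a boolean once, up front ...
    if precompute == 'auto':
        precompute = (n_samples > n_features)
    # ... so any later check only needs `if precompute:`; re-testing
    # ('auto' and n_samples > n_features) downstream is redundant.
    return bool(precompute)

print(resolve_precompute('auto', 100, 10))  # True: tall X, the Gram matrix pays off
print(resolve_precompute('auto', 10, 100))  # False: wide X
print(resolve_precompute(False, 100, 10))   # False: an explicit choice wins
```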
Will fix, thanks!
To deprecate this feature it should disappear from the code base, e.g. in l882: models_train = path(X_train, y[train], **path_params). Then you'll need to take care of the predict step, especially the use of the intercept when fit_intercept=True.
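The intercept concern above amounts to this: with array-style path output, per-alpha predictions must add the per-alpha intercept back when fit_intercept=True. A sketch with hypothetical names (predict_along_path is not a scikit-learn function; shapes follow the path convention of coefs being (n_features, n_alphas)):

```python
import numpy as np

def predict_along_path(X, coefs, intercepts):
    """Hypothetical helper: column k of the result is
    X @ coefs[:, k] + intercepts[k]."""
    # Broadcasting adds intercepts[k] to every row of column k.
    return X @ coefs + intercepts

# Made-up shapes: 20 samples, 5 features, 7 alphas on the path.
rng = np.random.RandomState(0)
X = rng.randn(20, 5)
coefs = rng.randn(5, 7)
intercepts = rng.randn(7)
Y_hat = predict_along_path(X, coefs, intercepts)
print(Y_hat.shape)  # (20, 7): one prediction column per alpha
```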
models = lasso_path(X, y, eps=eps)
alphas_lasso = np.array([model.alpha for model in models])
coefs_lasso = np.array([model.coef_ for model in models])
alphas_lasso, coefs_lasso = lasso_path(X, y, eps, return_models=False)
I think that you want to add a comment here on the purpose of 'return_models'.
alphas_positive_enet = np.array([model.alpha for model in models])
coefs_positive_enet = np.array([model.coef_ for model in models])
alphas_positive_enet, coefs_positive_enet = enet_path(X, y,
                                                      eps=eps,l1_ratio=0.8,
Missing whitespace after the comma to be PEP8-compliant.
I left a couple of very minor comments. After that, please do merge!
To clarify my previous comment: the models are used in LinearModelCV so
Indeed, this is a blocker for the merge. Also, it will lead to
I've addressed all the smaller comments; however, this last one is a bit of a problem. I guess it would be a bit sloppy to leave 'return_models' in as-is indefinitely. I'm not sure which solution is better regarding changing the API and the intercepts. Since my Inria report and presentation are now done, after consuming my whole week, I could maybe have a look. Any guidance appreciated :)
I am starting with this!
Cool - keep me posted 👍
Closing this in favor of #2186
Addresses Issue #32
Almost done.. Just an example update, checking some docstrings, and going over the remaining tests - then I'll change to MRG.
Unless I'm missing anything, I'd say this guy is ready.