[MRG+1] Add sample_weight support to RidgeClassifier #4838
Conversation
looks good
probably worth looking at #4490
LGTM
thanks.
Thanks for the reviews guys!
@@ -487,6 +487,35 @@ def test_class_weights():
    assert_array_almost_equal(clf.intercept_, clfa.intercept_)


def test_class_weight_vs_sample_weight():
This is a nice test, but is more appropriate as a common test than one specific to Ridge, and I think it's time for there to be common tests regarding sample weight. They would best rely on:
- the ability to distinguish an estimator which accepts sample_weight from one which does not.
- something like assert_same_model from "[WIP] Adding tests for estimators implementing partial_fit and a few other related fixes / enhancements" (#3907)
Agreed @jnothman, nothing special to Ridge here; it is purely a test for classifiers with class_weight (constructor) and sample_weight (fit) to check equivalency and multiplicative combination. It should be able to work its way into a common test, I would think. This test appears in the tree tests (though with an additional multi-output step) and elsewhere (I think I may have pilfered or altered it from SGD originally).
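The equivalency described here can be sketched in closed form (a minimal numpy sketch, not the PR's actual test): RidgeClassifier encodes a binary target as {-1, +1} and fits a weighted ridge regression, so giving a sample weight 2 should match simply including that sample twice, which is what a class_weight of {0: 2.0} amounts to for every class-0 sample.

```python
import numpy as np

def ridge_coef(X, y, sample_weight, alpha=1.0):
    # Closed-form weighted ridge without intercept:
    # (X' W X + alpha I)^{-1} X' W y, with W = diag(sample_weight)
    Xw = X * sample_weight[:, np.newaxis]
    A = Xw.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(A, Xw.T @ y)

rng = np.random.RandomState(0)
X = rng.randn(20, 3)
classes = rng.randint(0, 2, size=20)
y = np.where(classes == 1, 1.0, -1.0)  # the {-1, +1} encoding RidgeClassifier uses

# class_weight={0: 2.0} realised as per-sample weights...
w = np.where(classes == 0, 2.0, 1.0)
coef_weighted = ridge_coef(X, y, w)

# ...versus literally including each class-0 sample twice
X_dup = np.vstack([X, X[classes == 0]])
y_dup = np.concatenate([y, y[classes == 0]])
coef_duplicated = ridge_coef(X_dup, y_dup, np.ones(len(y_dup)))

assert np.allclose(coef_weighted, coef_duplicated)
# Multiplicative combination: when both are given, the effective
# per-sample weight is class_weight[y_i] * sample_weight[i].
```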
I'd wondered whether it was copied from elsewhere. That's upsetting.
Perhaps we need an issue to track tests that should be factored into
common...
Upsetting how? That it's propagating outside of common tests?
Best I can trace right now was @dsullivan7's #3931 (somewhat similar test on multiplicative weights), then RF & D-Tree in #3961... Now here in Ridge. Cannot recall now the exact lineage I used from the RF/tree PR tests. Don't see any duplicated inline comments outside of RF/Tree/Ridge on git FWIW.
Note that in the tree-based classifiers I tested for equivalency in feature_importances_, and here it's coef_.
Something similar in #4215 coming soon enough too, I imagine, for the rest of the ensemble tests.
"Note that in the tree-based classifiers I tested for equivalency in feature_importances_, and here it's coef_."
Hence the use of assert_same_model.
Upsetting that we're duplicating code and then going to need to rein it in.
Gotcha
There are already common tests for class_weights, and special ones for linear classifiers:
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/estimator_checks.py#L1035
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/estimator_checks.py#L1087
Adding one that checks if sample_weights is equivalent to doing the corresponding class_weights would be nice, too. I feel it should be pretty straightforward, though. Why would you need a helper? Just check if decision_function is almost_equal.
True... as long as we don't have any classification-oriented transformers
that are not predictors that accept sample and class weights!
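For the predictor case, the decision_function check suggested above could look roughly like this (a sketch against RidgeClassifier itself, since it is the estimator at hand; the particular class_weight dict and data are illustrative):

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier

rng = np.random.RandomState(0)
X = rng.randn(40, 5)
y = rng.randint(0, 2, size=40)

# class_weight given in the constructor...
class_weight = {0: 2.0, 1: 3.0}
clf_cw = RidgeClassifier(class_weight=class_weight).fit(X, y)

# ...versus the same weighting passed as sample_weight to fit
sw = np.array([class_weight[label] for label in y])
clf_sw = RidgeClassifier().fit(X, y, sample_weight=sw)

# no helper needed: just compare decision_function outputs
assert np.allclose(clf_cw.decision_function(X), clf_sw.decision_function(X))
```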
        Returns
        -------
        self : returns an instance of self.
        """
        if sample_weight is None:
            sample_weight = 1.
Could you also modify this line:
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/ridge.py#L285
so that has_sw is False if sample_weight == 1.0.
sample_weight is currently implemented with a rescaling of the data. We can avoid copying the data in the case sample_weight == 1.0.
Another option is to check for sample_weight == 1.0 directly in _rescale_data.
sample_weight == 1.0 is semantically equivalent to sample_weight.ndim == 0?
For scalar values other than 1, sample_weight has the same effect as changing C or 1/alpha. So this is not very useful; we could potentially raise an exception.
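This equivalence is easy to verify in closed form (a numpy sketch, not sklearn code): a uniform scalar weight c rescales only the data-fit term of the ridge objective, which matches dividing the regularization strength alpha by c.

```python
import numpy as np

def ridge(X, y, alpha, weight=1.0):
    # closed-form ridge without intercept: (w X'X + alpha I)^{-1} w X'y
    A = weight * (X.T @ X) + alpha * np.eye(X.shape[1])
    return np.linalg.solve(A, weight * (X.T @ y))

rng = np.random.RandomState(0)
X = rng.randn(25, 3)
y = rng.randn(25)

c = 2.0
coef_weight = ridge(X, y, alpha=1.0, weight=c)  # sample_weight = c everywhere
coef_alpha = ridge(X, y, alpha=1.0 / c)         # same fit via alpha / c

assert np.allclose(coef_weight, coef_alpha)
```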
I added #4846 to track this issue.
Added sample_weight support to RidgeClassifier, as it was in RidgeClassifierCV but not in the non-CV implementation. Also added some tests to both of the above to check that it reacts as expected when compared to class_weight from the constructor.