DOC update and improve the `sample_weight` entry in the glossary #30564
Conversation
For information, for the examples of sample weight usage, we could be more specific, for instance:
But linking to the papers does not necessarily make it explicit that this can be implemented via scikit-learn's `sample_weight`.
Just wanted to offer a few alternative word choices for consideration.
> bagging models randomly resample from the training data, etc. In this
> case, the sample weight-repetition equivalence property described above
> does not hold exactly. However, it should hold at least in expectation
> over the randomness of the fitting procedure.
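The weight-repetition equivalence being discussed can be illustrated with a small sketch (my own example, not part of the PR): for a deterministic, unpenalized estimator such as `LinearRegression`, fitting with integer sample weights matches fitting on a dataset where each row is repeated that many times.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0]])
y = np.array([1.0, 2.0, 2.0])
w = np.array([1, 2, 3])  # integer weights, interpreted as repetition counts

# Fit with integer sample weights.
weighted = LinearRegression().fit(X, y, sample_weight=w)

# Fit on the explicitly repeated dataset.
X_rep = np.repeat(X, w, axis=0)
y_rep = np.repeat(y, w)
repeated = LinearRegression().fit(X_rep, y_rep)

print(np.allclose(weighted.coef_, repeated.coef_))  # True
```

For randomized estimators (bagging, etc.), this identity only holds in expectation, as the quoted paragraph explains.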
Note to reviewers: I think the equivalence should even hold in distribution rather than just in expectation, but I am not 100% sure this can always be enforced in scikit-learn (yet).
Could you wait for my review before merging? Even if it takes 2 weeks?
Just a high-level review: this PR makes sample weights the longest entry in the glossary. I would prefer a more concise entry.
I could move some of the details to a dedicated section of the user guide (maybe as part of the estimator API docs) and keep a short version in the glossary with a link to the user guide for more details, if people prefer.
FYI: In supervised learning, one of the main reasons for using weights, e.g. in insurance frequency models (Poisson loss), is to account for the different size/time/exposure/... of each observation. The assumptions are usually:
In the end, the weights enter the loss. The idea is that the "correct" weights improve the efficiency of the estimation of the **expected loss** (= empirical mean of per-observation losses). However, to my knowledge, there is no direct connection between **expected loss** and **statistical risk**.
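To make "the weights enter the loss" concrete, here is a small sketch (my own, not from the PR): the weighted empirical loss is just the weighted average of per-observation losses, which matches what scikit-learn's metrics compute when given `sample_weight`, shown here for the Poisson deviance.

```python
import numpy as np
from sklearn.metrics import mean_poisson_deviance

y_true = np.array([1.0, 2.0, 4.0])
y_pred = np.array([1.5, 2.0, 3.0])
w = np.array([0.5, 1.0, 2.0])  # e.g. the exposure of each observation

# Per-observation Poisson deviance (y_true > 0 here, so no xlogy edge case)...
per_obs = 2 * (y_true * np.log(y_true / y_pred) - y_true + y_pred)
# ...and its weighted empirical mean.
by_hand = np.average(per_obs, weights=w)

print(np.isclose(by_hand, mean_poisson_deviance(y_true, y_pred, sample_weight=w)))  # True
```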
I would also mention reweighting for transfer learning, and define in one sentence what it means.
Also, I find this paragraph very useful, but it should either go in a fold or in a dedicated section of the docs.
Thanks @ogrisel. I have added a few comments.
Co-authored-by: Gael Varoquaux <gael.varoquaux@normalesup.org> Co-authored-by: Stefanie Senger <91849487+StefanieSenger@users.noreply.github.com>
Thanks for the reviews @StefanieSenger @GaelVaroquaux @lorentzenchr. I addressed all the specific feedback. Personally, I am fine with keeping all this info centralized into a largish glossary entry, but I can also split it into a dedicated section in the user guide (where?) and cross-reference if people prefer.
I think this is a good entry, even though it is quite long.
It would be nicer to have a short glossary entry and a dedicated user guide section. But I think we'd need to put in more work to create that user guide section, for example expanding on the third-party paragraph from this PR with some links to examples of third-party estimators. (I'm wondering if the text, as it is, is too brief for people who aren't experts to understand and too long for those who are experts/third-party implementers.)
The final paragraph on KMeans and sampling is another one that I think we could expand in the user guide. At least I am not sure I fully understand what it is trying to tell me without an example.
However, maybe we should merge this now and open a follow-up issue to improve the user guide and shorten the glossary entry, mostly because writing the user guide section would be quite a bit of additional work.
> floats, so that sample weights are usually equivalent up to a constant
> positive scaling factor.
equivalent to what? I don't understand this sentence.
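For what it's worth, the property the quoted sentence seems to be describing can be checked with a sketch (my own, under the assumption that no penalty is involved): for an unpenalized estimator, rescaling all sample weights by the same positive constant leaves the fitted model unchanged, i.e. only relative weights matter.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = rng.normal(size=20)
w = rng.uniform(0.5, 2.0, size=20)

# Fit once with weights w, once with the same weights scaled by 10.
m1 = LinearRegression().fit(X, y, sample_weight=w)
m2 = LinearRegression().fit(X, y, sample_weight=10.0 * w)

print(np.allclose(m1.coef_, m2.coef_))  # True: the fit is scale-invariant
```

Note that for penalized estimators the global scale of the weights does interact with the regularization strength, so this invariance is not universal.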
> Some model hyper-parameters are expressed in terms of a discrete number
> of samples in a region of the feature space. When fitting with sample
> weights, a count of samples is often automatically converted to a sum
> of their weights as is the case for `min_samples` in
> :class:`cluster.DBSCAN`, for instance. However, this is not always the
> case. In particular, the ``min_samples_leaf`` parameter in
> :class:`ensemble.RandomForestClassifier` does not take weights into
> account. One should instead pass `min_weight_fraction_leaf` to
> :class:`ensemble.RandomForestClassifier` to specify the minimum sum of
> weights of samples in a leaf.
I think this information is best placed in the docstrings of those classes, not here; or, if kept here, keep only the first 3-4 lines.
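The `min_samples`-becomes-a-weight-sum behaviour described in the quoted paragraph can be demonstrated with a small sketch (my own example, not from the PR): with `sample_weight`, DBSCAN compares `min_samples` against the sum of neighbour weights rather than the raw neighbour count, so a single heavy point can become a core sample on its own.

```python
import numpy as np
from sklearn.cluster import DBSCAN

X = np.array([[0.0], [10.0]])  # two isolated points, far apart

# Without weights, neither point has min_samples=5 neighbours: all noise.
labels_unweighted = DBSCAN(eps=1.0, min_samples=5).fit(X).labels_
print(labels_unweighted)  # [-1 -1]

# A weight of 5 makes the first point a core sample by itself,
# while the second point (weight 1) remains noise.
labels_weighted = DBSCAN(eps=1.0, min_samples=5).fit(
    X, sample_weight=[5, 1]
).labels_
print(labels_weighted)
```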
> Third-party libraries can also use `sample_weight`-compatible
> estimators as building blocks to reduce a specific statistical task
> into a weighted regression or classification task. For instance sample
> weights can be constructed to adjust a time-to-event model for
> censoring in a predictive survival analysis setting. In causal
> inference, it is possible to reduce a conditional average treatment
> effect estimation task to a weighted regression task under some
> assumptions. Sample weights can also be used to mitigate
> fairness-related harms based on a given quantitative definition of
> fairness.
This paragraph doesn't feel like it belongs here.
As discussed in #29907 (comment).
`min_samples` in `DBSCAN` and contrasted it with the `min_samples_leaf`/`min_weight_fraction_leaf` parameters pair. [...] `class_weight` parameter.
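For context on the `class_weight` parameter mentioned above, here is a sketch (my own, not part of the PR) of how it relates to `sample_weight` in scikit-learn: class weights are applied by multiplying each sample's weight by the weight of its class, so passing `class_weight` is equivalent to passing the corresponding per-sample weights directly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = (X[:, 0] + rng.normal(scale=0.5, size=50) > 0).astype(int)

# Upweight class 1 via class_weight...
via_class_weight = LogisticRegression(class_weight={0: 1.0, 1: 3.0}).fit(X, y)

# ...or equivalently via an explicit per-sample weight vector.
via_sample_weight = LogisticRegression().fit(
    X, y, sample_weight=np.where(y == 1, 3.0, 1.0)
)

print(np.allclose(via_class_weight.coef_, via_sample_weight.coef_))  # True
```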