I was working on several modifications of gradient boosting with specific loss functions, and unfortunately I was not able to reuse scikit-learn's gradient boosting for my purposes.
My problems alone are not a reason to change anything in sklearn, but the concept below also resolves other issues (see the possible benefits).
Loss function is an estimator
- Loss function has parameters. For instance, this is useful for different kinds of regularization.
- Loss function should be fitted: `loss.fit(X, y, sample_weight=sample_weight)`. This means that the loss function has state (and may keep some useful information) or can do some heavy precomputations (see the sketch after this list).
- There are methods for the negative gradient, for updating tree leaves, etc.
- Since the loss is an estimator, it can be cloned.
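A minimal sketch of what such an interface could look like (names such as `AbstractLossFunction`, `negative_gradient` and `update_leaves` are my illustrations, not an existing sklearn API):

```python
import numpy as np
from sklearn.base import BaseEstimator


class AbstractLossFunction(BaseEstimator):
    """Loss as an estimator: parameters in __init__, state created in fit."""

    def fit(self, X, y, sample_weight=None):
        # Heavy, one-off precomputations (ranks, neighbors, ...) go here;
        # the result is stored on the instance and reused at every iteration.
        self.sample_weight_ = (np.ones(len(y)) if sample_weight is None
                               else np.asarray(sample_weight))
        return self

    def negative_gradient(self, y, pred):
        # Pseudo-residuals that the next tree is fitted on.
        raise NotImplementedError

    def update_leaves(self, tree, X, y, pred):
        # Optional loss-specific re-estimation of leaf values; no-op by default.
        pass


class L2RegularizedMSE(AbstractLossFunction):
    """Example: squared error with an illustrative l2_reg parameter."""

    def __init__(self, l2_reg=0.0):
        self.l2_reg = l2_reg  # stored as-is, so sklearn.base.clone works

    def negative_gradient(self, y, pred):
        # Gradient of 0.5*(y - pred)**2 + 0.5*l2_reg*pred**2, negated.
        return (y - pred) - self.l2_reg * pred
```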
Possible benefits
- Ranking algorithms implemented as loss functions:
  - they require some initial work with ranks, which is done during fit;
  - the name of a special column with ranks is passed as a parameter to the loss function, while the ranks themselves are obtained during fitting (see the sketch after this list).
- My algorithm requires computing neighbors in some variables; this is done while fitting the loss function.
- Loss-specific regularizations as parameters.
- Flexibility: loss functions can be reused by other algorithms (e.g. I'm using them for pruning, but one could use them to build rankers on top of logistic regression with a modified loss function, which seems nice).
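As an illustration of the ranking case, here is a hypothetical pairwise loss that receives the index of the column holding query ids as a constructor parameter and computes the grouping once, in `fit`. It reuses `AbstractLossFunction` from the sketch above and mirrors hep_ml.losses only in spirit:

```python
import numpy as np


class PairwiseRankingLoss(AbstractLossFunction):
    """Hypothetical RankNet-style loss; rank_column is the query-id column."""

    def __init__(self, rank_column=0):
        self.rank_column = rank_column

    def fit(self, X, y, sample_weight=None):
        super().fit(X, y, sample_weight=sample_weight)
        queries = np.asarray(X)[:, self.rank_column]
        # Precompute, for each query id, the indices of its samples.
        self.groups_ = [np.flatnonzero(queries == q) for q in np.unique(queries)]
        return self

    def negative_gradient(self, y, pred):
        y = np.asarray(y)
        grad = np.zeros_like(pred, dtype=float)
        for idx in self.groups_:
            yi, pi = y[idx], pred[idx]
            sign = np.sign(yi[:, None] - yi[None, :])  # who should rank higher
            margin = pi[:, None] - pi[None, :]         # current score gap
            # Pairwise logistic pull: push better-labeled samples upwards.
            grad[idx] = np.sum(sign / (1.0 + np.exp(sign * margin)), axis=1)
        return grad
```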
The loss function then becomes the main logic of the algorithm, but this is probably fine: there are different losses for different problems, which is exactly what makes GB so universal.
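To illustrate that point, a boosting loop could delegate everything problem-specific to the loss object (an illustrative pseudo-structure, not how sklearn's GradientBoostingRegressor is actually written):

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeRegressor


def fit_gb(X, y, loss, n_estimators=100, learning_rate=0.1):
    loss = clone(loss).fit(X, y)  # the loss is cloned and fitted like any estimator
    pred = np.zeros(len(y))
    trees = []
    for _ in range(n_estimators):
        residuals = loss.negative_gradient(y, pred)
        tree = DecisionTreeRegressor(max_depth=3).fit(X, residuals)
        loss.update_leaves(tree, X, y, pred)  # loss-specific leaf re-estimation
        pred += learning_rate * tree.predict(X)
        trees.append(tree)
    return trees
```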
Implementation
If you're interested in the details of the implementation, you're welcome to look at the hep_ml.losses sources, where there is already an example of a ranking loss function.