Description
This is a feature request to allow more flexibility in elastic net by letting the user apply a separate penalty to each coefficient of the L1 term. The default value of this penalty factor is 1, which gives the regular elastic net behavior. If the penalty factor of a feature is zero, that feature is not penalized at all, meaning the user wants it to always stay in the model.
This feature is very useful in bioinformatics and systems biology (which is why it is in Stanford's R package glmnet). With it, the user can run feature selection on a set of genes while ensuring that some genes stay in the model unpenalized, because prior knowledge says they are involved in the system.
Here is the glmnet documentation explaining the penalty factor; it mainly controls the selection weight on the lasso term:
https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html#lin
This feature is used in several papers; I've implemented it in scikit-learn for one of mine. I've had requests from other biologists who want to use this method in Python without having to recompile scikit-learn. I can open a pull request with this feature.
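As a rough sketch of the idea, strictly positive penalty factors can already be emulated with the current scikit-learn API by rescaling columns: minimizing ||y - Xw||² + α Σⱼ pⱼ|wⱼ| is equivalent to dividing column j of X by pⱼ, fitting an ordinary `Lasso`, and dividing the fitted coefficient by pⱼ afterwards. This workaround is my illustration, not part of the proposal, and it cannot express pⱼ = 0 (a fully unpenalized feature), which is exactly what native support would add. The data and penalty factors below are made up for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic data (hypothetical example, not from the papers cited below).
rng = np.random.RandomState(0)
X = rng.randn(100, 5)
true_w = np.array([3.0, 0.0, -2.0, 0.0, 1.5])
y = X @ true_w + 0.1 * rng.randn(100)

# Per-feature L1 penalty factors; 1.0 is the ordinary lasso weight,
# smaller values penalize the feature less. All must be > 0 for this
# rescaling trick -- a factor of 0 needs native support in the solver.
penalty_factor = np.array([1.0, 1.0, 0.1, 1.0, 2.0])

# Scale column j by 1/p_j, fit a standard lasso, then map the
# coefficients back to the original feature scale.
X_scaled = X / penalty_factor
lasso = Lasso(alpha=0.1, fit_intercept=False)
lasso.fit(X_scaled, y)
coef = lasso.coef_ / penalty_factor

print(coef)
```

Feature 2 (factor 0.1) is shrunk far less toward zero than it would be under a uniform penalty, which is the behavior the penalty-factor parameter would expose directly.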
Papers using this feature:
- Altarawy, Doaa, Fatma-Elzahraa Eid, and Lenwood S. Heath. "PEAK: Integrating Curated and Noisy Prior Knowledge in Gene Regulatory Network Inference." Journal of Computational Biology 24.9 (2017): 863-873.
- Greenfield, Alex, Christoph Hafemeister, and Richard Bonneau. "Robust data-driven incorporation of prior knowledge into the inference of dynamic regulatory networks." Bioinformatics 29.8 (2013): 1060-1067.
- Friedman, Jerome, Trevor Hastie, and Rob Tibshirani. "Regularization paths for generalized linear models via coordinate descent." Journal of statistical software 33.1 (2010): 1.