Add balanced_accuracy_score metrics #3506
Comments
I'm trying to work on this. I have a little question, however: sensitivity is by definition TP / (TP + FN), so what happens if TP + FN = 0? I'm not really familiar with the terms TP, FN, etc., so please correct me if I make any mistake. Thanks.
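As an aside (not from the thread): the degenerate case asked about here can be handled explicitly. A minimal sketch, with `sensitivity` as a hypothetical helper, assuming the common convention of returning 0 when the positive class never occurs in `y_true`:

```python
def sensitivity(y_true, y_pred, pos_label=1):
    """Recall for pos_label: TP / (TP + FN).

    Returns 0.0 in the degenerate case TP + FN == 0, i.e. when the
    positive class is absent from y_true (one common convention;
    another is to return NaN or raise a warning).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == pos_label and p == pos_label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == pos_label and p != pos_label)
    if tp + fn == 0:  # the quotient is undefined here
        return 0.0
    return tp / (tp + fn)

print(sensitivity([1, 1, 0, 0], [1, 0, 0, 0]))  # 0.5
print(sensitivity([0, 0, 0, 0], [1, 0, 1, 0]))  # 0.0 (no positives in y_true)
```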
Perhaps it should just support the binary case at first.
TP, TN, FP and FN are only well-defined when one class is considered negative (which is quite common). Usually we pick the class that is smallest according to Python's standard ordering unless overridden by the user.
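To illustrate these definitions (a hypothetical helper, not scikit-learn API): once one label is designated positive and everything else negative, the four counts fall out of a single pass over the label pairs:

```python
def confusion_counts(y_true, y_pred, pos_label):
    """Count TP, TN, FP, FN for a binary problem, treating every label
    other than pos_label as negative."""
    tp = tn = fp = fn = 0
    for t, p in zip(y_true, y_pred):
        if t == pos_label:
            if p == pos_label:
                tp += 1       # true positive: positive, predicted positive
            else:
                fn += 1       # false negative: positive, predicted negative
        else:
            if p == pos_label:
                fp += 1       # false positive: negative, predicted positive
            else:
                tn += 1       # true negative: negative, predicted negative
    return tp, tn, fp, fn

# "a" sorts before "b", so by the convention above "a" would be positive
print(confusion_counts(["a", "a", "b", "b"], ["a", "b", "b", "b"], pos_label="a"))
# (1, 2, 0, 1)
```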
@lazywei , did you get a chance to start on this? If not, I may take a stab at it.
@adam-m-mcelhinney
Got it. Thanks.
@ppuggioni you have worked on this during the sprint: can you please open a PR with a
Indeed, please submit yours and cross-reference the two PRs to compare the results.
Hi all, may I take over this issue? It seems that the two referenced PRs have stalled. I have done the coding, documentation, and testing. I found that the balanced accuracy score is equal to the average of the positive-label recall and the negative-label recall. I did not take the balance weight into account, since on Wikipedia it is fixed at 0.5, so my code is quite simple. May I open a [WIP] PR? @ogrisel @jnothman I am new to scikit-learn development, but I have used the package for a long time. I am a PhD student studying machine learning, and I hope I can join GSoC 2015. Thanks,
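The equivalence mentioned here — balanced accuracy as the 0.5/0.5-weighted average of the two per-class recalls, matching the fixed weights in the Wikipedia definition — can be checked with a short sketch. `recall` and `balanced_accuracy` are hypothetical helpers, assuming 0/1 labels and that both classes occur in `y_true`:

```python
def recall(y_true, y_pred, label):
    """Fraction of samples with true class `label` that were predicted
    as `label` (assumes `label` occurs at least once in y_true)."""
    pairs = list(zip(y_true, y_pred))
    true_count = sum(1 for t, _ in pairs if t == label)
    tp = sum(1 for t, p in pairs if t == label and p == label)
    return tp / true_count

def balanced_accuracy(y_true, y_pred):
    # Fixed 0.5/0.5 weighting of the two class recalls
    return 0.5 * recall(y_true, y_pred, 0) + 0.5 * recall(y_true, y_pred, 1)

y_true = [0, 0, 0, 1, 1]
y_pred = [0, 0, 1, 1, 0]
# recall for class 0 is 2/3, for class 1 is 1/2, so the average is 7/12
print(balanced_accuracy(y_true, y_pred))  # 0.5833...
```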
I haven't touched it. I believe someone else had finished it, however.
Hi @adam-m-mcelhinney, searching keywords
Looks like it's all yours then. Let me know if you want to collaborate on it.
Closed by #8066.
There has been some discussion on the mailing list about adding a balanced accuracy metric (see this article for the definition). This is a good opportunity for a first contribution.
This implies coding the function, checking correctness through tests, and highlighting your work with documentation. In order to be easily used by many, a balanced accuracy scorer is a good idea. As a bonus, it could also support
sample_weight
.
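A minimal sketch of the requested metric including the `sample_weight` bonus (a hypothetical illustration, not the implementation merged in #8066): weight each sample when computing per-class recall, then average the recalls over the classes present in `y_true`:

```python
def weighted_balanced_accuracy(y_true, y_pred, sample_weight=None):
    """Average of per-class recalls, with each sample contributing its
    weight to both the numerator and denominator of its class's recall.
    Assumes every class in y_true has nonzero total weight."""
    if sample_weight is None:
        sample_weight = [1.0] * len(y_true)
    recalls = []
    for cls in sorted(set(y_true)):
        total = sum(w for t, w in zip(y_true, sample_weight) if t == cls)
        hit = sum(w for t, p, w in zip(y_true, y_pred, sample_weight)
                  if t == cls and p == cls)
        recalls.append(hit / total)
    return sum(recalls) / len(recalls)

print(weighted_balanced_accuracy([0, 0, 1, 1], [0, 1, 1, 1]))  # (0.5 + 1.0) / 2 = 0.75
# Doubling the weight of the first sample changes class 0's recall to 2/3
print(weighted_balanced_accuracy([0, 0, 1, 1], [0, 1, 1, 1],
                                 sample_weight=[2, 1, 1, 1]))  # (2/3 + 1) / 2
```

With unit weights this reduces to the plain average-of-recalls definition discussed earlier in the thread.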