
Conversation

sandeepvaday

Reference Issues/PRs

Please see Issue #10391

What does this implement/fix? Explain your changes.

As per the discussion, instead of adding a False Positive Rate metric, this adds a specificity score metric.

Any other comments?

@sandeepvaday
Author

@jnothman FYI.

@jnothman left a comment
Member

I think we need to either not support multiclass or document the multiclass behaviour better. Is this a standard definition for the multiclass case? The binary case can also be calculated with recall_score, though.
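For context, the binary-case equivalence mentioned here can be sketched as follows: specificity is the recall of the negative class, so recall_score with pos_label set to the negative label already computes it. The labels below are made up for illustration and assume 0 is the negative class; this is not code from the PR.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, recall_score

# Illustrative binary labels, not taken from this PR.
y_true = [0, 1, 0, 1, 1, 0, 0, 1]
y_pred = [0, 1, 1, 1, 0, 0, 0, 1]

# Specificity = TN / (TN + FP).
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
specificity = tn / (tn + fp)

# The same quantity is the recall of the negative class.
assert np.isclose(specificity, recall_score(y_true, y_pred, pos_label=0))
```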

@sandeepvaday
Author

How would we calculate it from recall_score? Perhaps you are referring to sensitivity, not the specificity I aim to compute here?

In multiclass cases, it is often necessary to compute the false positive rate for each class to evaluate the model. By computing and returning the specificity for each class, we allow the user either to refer to the individual values or to compute the macro/micro average from the returned array.

If you would like, I can document the multiclass behaviour along these lines, or add more examples.
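As a rough sketch of the per-class computation described above (not the PR's actual implementation; labels are illustrative): each class's specificity can be read off the multiclass confusion matrix, and a macro average can be taken over the returned array.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Illustrative multiclass labels, not taken from this PR.
y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 2, 2, 2, 1, 0, 1, 1]

C = confusion_matrix(y_true, y_pred)
tp = np.diag(C)
fp = C.sum(axis=0) - tp   # predicted as class i, actually another class
fn = C.sum(axis=1) - tp   # actually class i, predicted as another class
tn = C.sum() - (tp + fp + fn)

per_class_specificity = tn / (tn + fp)   # one value per class
macro_specificity = per_class_specificity.mean()
```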

@jnothman
Member

jnothman commented Mar 19, 2018 via email

@jnothman left a comment
Member

Please also add specificity_score to sklearn/metrics/tests/test_common.py.

@jnothman
Member

You might also want to see #10628, which may make specificity_score less necessary, or at least simplify its implementation, by providing multilabel_confusion_matrix.
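For reference, a sketch of how this simplification could look, assuming the multilabel_confusion_matrix API that eventually shipped in sklearn.metrics (at the time of this thread it was still the proposal in #10628). It returns one 2x2 one-vs-rest matrix per class, so per-class specificity falls out of a couple of array operations; labels below are illustrative.

```python
from sklearn.metrics import multilabel_confusion_matrix

# Illustrative multiclass labels, not taken from this PR.
y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 2, 2, 2, 1, 0, 1, 1]

# One binary confusion matrix per class: [[tn, fp], [fn, tp]].
mcm = multilabel_confusion_matrix(y_true, y_pred)
tn = mcm[:, 0, 0]
fp = mcm[:, 0, 1]

per_class_specificity = tn / (tn + fp)
macro_specificity = per_class_specificity.mean()
```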

@jnothman
Member

And your tests are currently failing.

@sandeepvaday
Author

Well, multilabel_confusion_matrix does make my contribution trivial. I suppose I should close this pull request and not bother with the failing tests either?

Let me know.

@jnothman
Member

jnothman commented Mar 19, 2018 via email

@amueller added the Needs Benchmarks and Stalled labels on Aug 5, 2019.