
Commit 3cdeb44

XavierSATTLER authored and jnothman committed
DOC clarified hamming loss docstrings (scikit-learn#13760)
1 parent 03eb27e commit 3cdeb44


sklearn/metrics/classification.py

Lines changed: 7 additions & 5 deletions
@@ -1989,16 +1989,18 @@ def hamming_loss(y_true, y_pred, labels=None, sample_weight=None):
     -----
     In multiclass classification, the Hamming loss corresponds to the Hamming
     distance between ``y_true`` and ``y_pred`` which is equivalent to the
-    subset ``zero_one_loss`` function.
+    subset ``zero_one_loss`` function, when `normalize` parameter is set to
+    True.
 
     In multilabel classification, the Hamming loss is different from the
     subset zero-one loss. The zero-one loss considers the entire set of labels
     for a given sample incorrect if it does not entirely match the true set of
-    labels. Hamming loss is more forgiving in that it penalizes the individual
-    labels.
+    labels. Hamming loss is more forgiving in that it penalizes only the
+    individual labels.
 
-    The Hamming loss is upperbounded by the subset zero-one loss. When
-    normalized over samples, the Hamming loss is always between 0 and 1.
+    The Hamming loss is upperbounded by the subset zero-one loss, when
+    `normalize` parameter is set to True. It is always between 0 and 1,
+    lower being better.
 
     References
     ----------
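As a quick illustration of the clarified notes (a minimal sketch, assuming scikit-learn is installed; the toy arrays below are made-up examples, not from the commit): in the multilabel case the Hamming loss penalizes only the individual wrong labels, while the subset zero_one_loss (with normalize=True, its default) marks the whole sample as wrong, so the Hamming loss never exceeds the subset zero-one loss.

import numpy as np
from sklearn.metrics import hamming_loss, zero_one_loss

# Hypothetical multilabel indicator matrices (2 samples, 2 labels each).
y_true = np.array([[1, 1],
                   [1, 1]])
y_pred = np.array([[0, 1],   # first sample: one of its two labels is wrong
                   [1, 1]])  # second sample: fully correct

# Hamming loss counts wrong labels individually: 1 wrong out of 4 -> 0.25
print(hamming_loss(y_true, y_pred))                    # 0.25

# Subset zero-one loss marks the entire first sample wrong: 1 of 2 -> 0.5
print(zero_one_loss(y_true, y_pred, normalize=True))   # 0.5

# 0.25 <= 0.5: the Hamming loss is upper-bounded by the subset zero-one loss,
# and both values lie between 0 and 1, lower being better.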
