I am reading the documentation on Hamming loss:

"The Hamming loss is upperbounded by the subset zero-one loss. When normalized over samples, the Hamming loss is always between 0 and 1."

When does this normalization over samples occur? Isn't the Hamming loss always computed over all samples (not just one) when using this function, and therefore always normalized? As written, the text suggests that one should normalize the Hamming loss further to ensure it lies between 0 and 1, but that does not appear to be the case. I would suggest clarifying the language here.
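For reference, here is a minimal sketch illustrating the behavior in question: the value returned by `sklearn.metrics.hamming_loss` is already averaged over samples (and labels, in the multilabel case), so it lands in [0, 1] with no further normalization by the caller. The multilabel inputs below are the ones used in the scikit-learn docs.

```python
import numpy as np
from sklearn.metrics import hamming_loss

# Multiclass case: the fraction of misclassified samples.
y_true = [2, 2, 3, 4]
y_pred = [1, 2, 3, 4]
print(hamming_loss(y_true, y_pred))  # 0.25 (1 mismatch out of 4 samples)

# Multilabel case: the fraction of wrong labels, averaged over
# samples and labels, so it is still in [0, 1].
y_true_ml = np.array([[0, 1], [1, 1]])
y_pred_ml = np.zeros((2, 2))
print(hamming_loss(y_true_ml, y_pred_ml))  # 0.75 (3 wrong labels out of 4)
```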