
Unclear when hamming loss is normalized and when not #13734

@mitar

Description


I am reading the documentation on Hamming loss:

The Hamming loss is upperbounded by the subset zero-one loss. When normalized over samples, the Hamming loss is always between 0 and 1.

When does this normalization over samples occur? Isn't the Hamming loss always computed over all samples (not just one) when using this function, and therefore always normalized?

Reading this, it sounds like one should normalize the Hamming loss further to ensure it lies between 0 and 1, but that does not appear to be the case. I would suggest clarifying the language here.
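For reference, a minimal sketch (with illustrative toy multilabel data, not from the report) showing that `sklearn.metrics.hamming_loss` already averages over every sample-label pair, so the returned value is always in [0, 1] without any further normalization:

```python
# Minimal sketch: sklearn.metrics.hamming_loss averages over every
# sample-label pair, so the returned value is already in [0, 1].
import numpy as np
from sklearn.metrics import hamming_loss

# Toy multilabel data (illustrative values only)
y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 1, 1],
                   [0, 1, 0]])

# One wrong label out of 2 samples * 3 labels = 6 slots -> 1/6
print(hamming_loss(y_true, y_pred))   # 0.1666...

# Equivalent manual computation: mean mismatch over all pairs
print(np.mean(y_true != y_pred))      # 0.1666...
```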
