Added IsInf layer to new DNN engine #27660


Open
wants to merge 1 commit into base: 5.x

Conversation

abhishek-gola
Contributor

Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

  • I agree to contribute to the project under Apache 2 License.
  • To the best of my knowledge, the proposed patch is not based on code under GPL or another license that is incompatible with OpenCV.
  • The PR is proposed to the proper branch
  • There is a reference to the original bug report and related work
  • There are accuracy tests, performance tests and test data in the opencv_extra repository, if applicable.
    The patch to opencv_extra has the same branch name.
  • The feature is well documented and sample code can be built with the project CMake

@asmorkalov asmorkalov added the category: dnn (onnx) ONNX support issues in DNN module label Aug 12, 2025
@asmorkalov asmorkalov added this to the 5.0-release milestone Aug 12, 2025
// Copyright (C) 2025, BigVision LLC, all rights reserved.
// Third party copyrights are property of their respective owners.

#include "../precomp.hpp"
Contributor

Please add a link to the description of the operation at onnx.ai, and also specify which opsets are supported.

for (size_t i = 0; i < count; ++i)
{
    const T v = src[i];
    const bool pos = cvIsInf(v) && (v > 0) && detectPositive;
Contributor
@vpisarev vpisarev Aug 12, 2025

I would split this into different loops:

if (detectPositive && detectNegative) {
   for (...) ... // just check for cvIsInf()
} else if (detectPositive) {
   for (...) ... // cvIsInf(v) && v > 0
} else if (detectNegative) {
   for (...) ... // cvIsInf(v) && v < 0
} else {
   // report error?
}
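The split suggested above could be sketched roughly as follows. This is only an illustration, not the PR's actual code: `std::isinf` stands in for OpenCV's `cvIsInf`, and the function name `computeIsInfMaskSplit` is hypothetical.

```cpp
#include <cmath>
#include <cstddef>

typedef unsigned char uchar;

// Sketch of the reviewer's suggestion: hoist the flag checks out of the
// loop so each branch runs a tight, branch-free-per-element loop.
// std::isinf stands in for OpenCV's cvIsInf here (sketch only).
template <typename T>
static void computeIsInfMaskSplit(const T* src, uchar* dst, size_t count,
                                  bool detectPositive, bool detectNegative)
{
    if (detectPositive && detectNegative) {
        for (size_t i = 0; i < count; ++i)
            dst[i] = (uchar)(std::isinf(src[i]) ? 1 : 0);
    } else if (detectPositive) {
        for (size_t i = 0; i < count; ++i)
            dst[i] = (uchar)((std::isinf(src[i]) && src[i] > 0) ? 1 : 0);
    } else if (detectNegative) {
        for (size_t i = 0; i < count; ++i)
            dst[i] = (uchar)((std::isinf(src[i]) && src[i] < 0) ? 1 : 0);
    } else {
        // Both flags off: ONNX defaults both attributes to 1, so this case
        // could be reported as an error (as suggested above); here the mask
        // is simply all zeros.
        for (size_t i = 0; i < count; ++i)
            dst[i] = 0;
    }
}
```

Whether the last branch should zero the mask or raise an error is the open question from the comment above.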

template <typename T>
static inline void computeIsInfMask(const T* src, uchar* dst, const size_t count, const bool detectPositive, const bool detectNegative)
{
    for (size_t i = 0; i < count; ++i)
Contributor
@vpisarev vpisarev Aug 12, 2025

It would be nice to make this loop parallel, just to eliminate a possible bottleneck when everything else in the model graph runs in parallel.
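In OpenCV this would typically be expressed with cv::parallel_for_ over a cv::Range. A minimal standard-library sketch of the same chunking idea (std::thread used only to keep the example self-contained, std::isinf standing in for cvIsInf, function name hypothetical):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <thread>
#include <vector>

typedef unsigned char uchar;

// Chunked parallel mask computation, as a sketch of the reviewer's idea.
// Real OpenCV code would use cv::parallel_for_ instead of raw threads.
template <typename T>
static void computeIsInfMaskParallel(const T* src, uchar* dst, size_t count)
{
    size_t nthreads = std::max<size_t>(1, std::thread::hardware_concurrency());
    size_t chunk = (count + nthreads - 1) / nthreads;
    std::vector<std::thread> workers;
    for (size_t start = 0; start < count; start += chunk) {
        size_t end = std::min(count, start + chunk);
        // Each worker fills a disjoint slice of dst, so no synchronization
        // is needed beyond the final join.
        workers.emplace_back([=] {
            for (size_t i = start; i < end; ++i)
                dst[i] = (uchar)(std::isinf(src[i]) ? 1 : 0);
        });
    }
    for (std::thread& t : workers)
        t.join();
}
```

With cv::parallel_for_ the per-chunk lambda body would be the same; only the range splitting is delegated to OpenCV's scheduler.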


switch (depth) {
case CV_32F: computeIsInfMask<float>(X.ptr<float>(), dst, total, detect_pos, detect_neg); break;
case CV_64F: computeIsInfMask<double>(X.ptr<double>(), dst, total, detect_pos, detect_neg); break;
Contributor

bfloat and hfloat should be supported as well:
1) Add an optional WT=T template parameter to computeIsInfMask.
2) Change T v = src[i]; to WT v = WT(src[i]);.
3) Use WT=float for hfloat and bfloat.
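The suggested widening could look roughly like this. It is a sketch under stated assumptions: a tiny fake_hfloat stand-in replaces OpenCV's cv::hfloat/cv::bfloat so the example is self-contained, and std::isinf stands in for cvIsInf.

```cpp
#include <cmath>
#include <cstddef>

typedef unsigned char uchar;

// Stand-in for OpenCV's cv::hfloat / cv::bfloat: a storage type that
// converts to float on read. Real code would use the OpenCV types, which
// store 16 bits; the float member here is a simplification.
struct fake_hfloat {
    float bits;
    operator float() const { return bits; }
};

// The reviewer's suggestion: an optional working type WT (defaulting to T)
// so 16-bit float inputs are widened to float before the isinf test.
template <typename T, typename WT = T>
static void computeIsInfMask(const T* src, uchar* dst, size_t count,
                             bool detectPositive, bool detectNegative)
{
    for (size_t i = 0; i < count; ++i) {
        const WT v = WT(src[i]);   // widen storage type to working type
        bool pos = detectPositive && std::isinf(v) && v > 0;
        bool neg = detectNegative && std::isinf(v) && v < 0;
        dst[i] = (uchar)((pos || neg) ? 1 : 0);
    }
}
```

A CV_16F or CV_16BF case in the dispatch switch would then instantiate the template with WT=float, e.g. computeIsInfMask<cv::hfloat, float>(...), while the existing CV_32F/CV_64F cases keep the default WT=T.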
