G-API: Use different devices processing in streaming pipeline #21716
Conversation
9ecc0a6 to 54552f2
accel_ctx.value());
std::cout << "enforce VPP preprocessing on " << device_id << std::endl;
// Turn on VPP PreprocesingEngine if available & requested
if (flow_settings->ie_preproc_enable) {
The name ie_preproc_enable can be misleading here: the default preprocessing in IE is also an "IE preproc".
ok, would renaming ie_preproc_enable -> vpl_preproc_enable be suitable?
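For illustration, a minimal sketch of how the proposed rename would read at the call site quoted above; the FlowSettings struct and its other contents are hypothetical, only the field name is the point:

// Hypothetical sample-side settings struct; only the renamed field matters here.
struct FlowSettings {
    bool vpl_preproc_enable = false;   // was: ie_preproc_enable
};

// Call site from the diff above, with the proposed name:
// if (flow_settings->vpl_preproc_enable) {
//     // Turn on VPP PreprocessingEngine if available & requested
// }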
// NB: consider NV12 surface because it's one of native GPU image format
face_net.pluginConfig({{"GPU_NV12_TWO_INPUTS", "YES" }});
std::cout << "enfore InferenceEngine NV12 blob" << std::endl;
enforce (typo: "enfore")
ok
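For context, a minimal sketch of how the NV12 two-input option is attached to the network parameters; the tag, the model/weights paths, and the use of the Generic params variant are placeholders, not the sample's actual values:

#include <opencv2/gapi/infer/ie.hpp>

// GPU_NV12_TWO_INPUTS asks the GPU plugin to take the Y and UV planes of an
// NV12 surface as two inputs, so no host-side color conversion is needed.
auto face_net = cv::gapi::ie::Params<cv::gapi::Generic>{
    "face-net",                // tag         (placeholder)
    "face-detection.xml",      // model IR    (placeholder)
    "face-detection.bin",      // weights     (placeholder)
    "GPU"                      // device
}.pluginConfig({{"GPU_NV12_TWO_INPUTS", "YES"}});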
face_net.cfgContextParams(ctx_config);
std::cout << "enfore InferenceEngine remote context on device: " << device_id << std::endl;
enforce (typo: "enfore")
thanks
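A rough sketch of what charging the remote context can look like on the VAAPI/GPU path; va_display is a placeholder for the handle shared with the oneVPL source, and the ParamMap keys come from the Inference Engine gpu_params.hpp header (verify against your IE version):

#include <inference_engine.hpp>
#include <gpu/gpu_params.hpp>   // GPU_PARAM_KEY / GPU_PARAM_VALUE

// Wrap the shared VA display into remote-context parameters for the GPU plugin.
InferenceEngine::ParamMap make_va_ctx_params(void* va_display) {
    return InferenceEngine::ParamMap{
        { GPU_PARAM_KEY(CONTEXT_TYPE), GPU_PARAM_VALUE(VA_SHARED) },
        { GPU_PARAM_KEY(VA_DEVICE),
          static_cast<InferenceEngine::gpu_handle_param>(va_display) }
    };
}

// ... later, when configuring the network:
// face_net.cfgContextParams(make_va_ctx_params(va_display));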
@@ -712,6 +713,10 @@ inline IE::Blob::Ptr extractRemoteBlob(IECallContext& ctx, std::size_t i,
    cv::MediaFrame frame = ctx.inFrame(i);
    if (ctx.uu.preproc_engine_impl) {
        GAPI_LOG_DEBUG(nullptr, "Try to use preprocessing for decoded remote frame in remote ctx");

        //TODO
        frame.blobParams();
Todo?
Oh, it looks like an exception will be thrown if there is a CPU adapter inside
I think I will change this block a little bit
extractRemoteBlob has been removed entirely now; please take a look
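For reference, one defensive way the quoted block could be reshaped (an illustrative sketch, not the code that actually landed): only treat the frame as remote when blobParams() succeeds, since a CPU-backed adapter throws from that call.

cv::MediaFrame frame = ctx.inFrame(i);
if (ctx.uu.preproc_engine_impl) {
    GAPI_LOG_DEBUG(nullptr, "Try to use preprocessing for decoded remote frame in remote ctx");
    try {
        // blobParams() throws for CPU-backed adapters, so a failure simply
        // means "no remote surface available" and we fall back.
        auto blob_params = frame.blobParams();
        // ... feed blob_params into the remote-context preprocessing path ...
    } catch (const std::exception& ex) {
        GAPI_LOG_DEBUG(nullptr, "No remote blob params, falling back to system memory: " << ex.what());
        // ... regular (non-remote) blob extraction ...
    }
}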
// - pass such wrappers as constructor arguments for each component in pipeline:
//   a) use special constructor for `onevpl::GSource`
//   b) use `cfgContextParams` method of `cv::gapi::ie::Params` to charge PreprocesingEngine
//   c) use `InferenceEngine::ParamMap` to activate remote ctx in Inference Engine
It would be very nice to have a way to configure everything in one place (set the device/context only once) when the user wants everything to run on GPU, for example. But at first glance I can't find a way to make that work.
Indeed, that's why we discussed the DeviceSelector approach many times before.
The feature is implemented here: #22212
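For context, the three configuration points (a), (b), (c) quoted above roughly map onto separate calls like the following sketch; source_cfgs, the input path, and ctx_config are placeholders, and the exact GSource overloads depend on the OpenCV version:

#include <opencv2/gapi/streaming/onevpl/source.hpp>
#include <opencv2/gapi/infer/ie.hpp>

// (a) the oneVPL source is created with its own accelerator/decoder parameters
std::vector<cv::gapi::wip::onevpl::CfgParam> source_cfgs;  // filled from CLI in the sample
auto source = cv::gapi::wip::make_onevpl_src("input.mp4", source_cfgs);

// (b) the IE backend's VPP PreprocessingEngine is charged with the shared context:
//     face_net.cfgContextParams(ctx_config);
// (c) the same InferenceEngine::ParamMap (ctx_config) activates the remote
//     context inside Inference Engine, so GPU surfaces flow without copies.

Each component is configured independently, which is the pain point the DeviceSelector in #22212 addresses by choosing the device/context once.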
Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
Patch to opencv_extra has the same branch name.
Build Configuration