Update torch::stable::Tensor() default constructor #159507

Conversation
Allows things like

```cpp
Tensor cu_seqlens_q;
if (...) {
  cu_seqlens_q = ...
}
...
```
Approach looks good to me! Let's add memory tests
```cpp
@@ -118,6 +124,12 @@ class Tensor {
    return size;
  }

  bool defined() const {
```
This has the same semantics as the one in TensorBase.h, right?
And to make sure I'm understanding correctly: an undefined tensor is an uninitialized tensor, which I know has no memory, but is it basically a nullptr?
Yes to the first
pytorch/torch/csrc/inductor/aoti_torch/shim_common.cpp, lines 759 to 764 in 5f5f508:

```cpp
AOTITorchError aoti_torch_new_uninitialized_tensor(AtenTensorHandle* ret) {
  AOTI_TORCH_CONVERT_EXCEPTION_TO_ERROR_CODE({
    at::Tensor* out_tensor = new at::Tensor();
    *ret = tensor_pointer_to_tensor_handle(out_tensor);
  });
}
```
pytorch/aten/src/ATen/templates/TensorBody.h, line 103 in 5f5f508:

```cpp
Tensor() = default;
```

pytorch/aten/src/ATen/core/TensorBase.h, line 95 in 5f5f508:

```cpp
TensorBase() = default;
```
My understanding is that the `TensorBase() = default` constructor would indeed initialize its `c10::intrusive_ptr<TensorImpl, UndefinedTensorImpl> impl_` to a nullptr.

I think there is a special case where `UndefinedTensorImpl` is not a nullptr but has its `bool()` method return false when `defined()` is called, but we're not using that here.
```python
libtorch_agnostic.ops.test_default_constructor(False)

final_memory = process.memory_info().rss
memory_increase = final_memory - initial_memory
```
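The snippet above presumably sits in an RSS-based leak check along these lines (a sketch using psutil; the helper name and iteration count are assumptions, not the actual test code):

```python
import os

import psutil  # third-party: pip install psutil


def rss_growth(fn, iterations=10_000):
    """Run fn repeatedly and return the growth in resident set size (bytes)."""
    process = psutil.Process(os.getpid())
    initial_memory = process.memory_info().rss
    for _ in range(iterations):
        fn()
    final_memory = process.memory_info().rss
    return final_memory - initial_memory
```

A leaking `fn` would make the return value grow with the iteration count; since RSS is noisy, a real test would need a generous threshold.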
woah a way to test CPU memory! how confident are you that this test is testing the right thing? This looks okay to me but I am not an expert
I tried creating a list of tensors on CPU and the resident set size increased, so I felt somewhat confident this was doing the right thing
cc @albanD do you know :P
This will most likely be flaky, as we don't control what else might grab CPU memory.
I would recommend running this test on CUDA and checking for CUDA memory, which is very reliable for this kind of thing.
Don't forget to add the @serialTest() decorator to avoid cross-pollution of course ;)
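The suggested CUDA-side check could look roughly like this (a sketch; `torch.cuda.memory_allocated()` is the real API, but the helper name is made up and not part of the PR):

```python
import torch


def cuda_memory_growth(fn, device="cuda"):
    """Return bytes of CUDA allocator growth caused by fn, or None without CUDA."""
    if not torch.cuda.is_available():
        return None  # CPU-only case, as discussed in this thread
    torch.cuda.synchronize(device)
    before = torch.cuda.memory_allocated(device)
    fn()
    torch.cuda.synchronize(device)
    after = torch.cuda.memory_allocated(device)
    return after - before
```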
This test is testing that the following code doesn't leak memory:

```cpp
stable::Tensor a;
```

I don't think we can make this use any cuda memory unfortunately, should we just remove the test? @janeyx99
Starting merge as part of PR stack under #159328

ghstack-source-id: 869d35c Pull Request resolved: pytorch#159507
Allows things like

```cpp
Tensor cu_seqlens_q;
if (...) {
  cu_seqlens_q = ...
}
...
```

Also adds `torch::stable::Tensor.defined()`

Stack from ghstack (oldest at bottom):