
Update torch::stable::Tensor() default constructor #159507


Open
wants to merge 5 commits into base: gh/mikaylagawarecki/330/base

Conversation

mikaylagawarecki
Contributor

@mikaylagawarecki mikaylagawarecki commented Jul 30, 2025

Allows things like

```cpp
Tensor cu_seqlens_q;
if (...) {
  cu_seqlens_q = ...
}
...
```

Also adds `torch::stable::Tensor.defined()`
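A minimal sketch of the pattern this enables, assuming `torch::stable::Tensor` is copy-assignable and using a hypothetical `make_cu_seqlens()` helper that returns a stable Tensor:

```cpp
// Sketch only: declare now, assign later, query defined().
torch::stable::Tensor cu_seqlens_q;   // default-constructed, currently undefined
if (!cu_seqlens_q.defined()) {
  cu_seqlens_q = make_cu_seqlens();   // hypothetical producer of a stable Tensor
}
// cu_seqlens_q.defined() is now true and the tensor can be used as usual
```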

Stack from ghstack (oldest at bottom):


pytorch-bot bot commented Jul 30, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/159507

Note: Links to docs will display an error until the docs builds have been completed.

❌ 5 New Failures

As of commit 7eeb740 with merge base 556e2a7:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

Contributor

This PR needs a release notes: label

If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Contributor

@janeyx99 janeyx99 left a comment


Approach looks good to me! Let's add memory tests

@@ -118,6 +124,12 @@ class Tensor {
return size;
}

bool defined() const {
Contributor

This is the same semantics as the one in TensorBase.h, right?

And to make sure I'm understanding correctly: an undefined tensor is an uninitialized tensor, which I know has no memory, but is it basically a nullptr?

Contributor Author

@mikaylagawarecki mikaylagawarecki Aug 8, 2025

Yes to the first.

```cpp
AOTITorchError aoti_torch_new_uninitialized_tensor(AtenTensorHandle* ret) {
  AOTI_TORCH_CONVERT_EXCEPTION_TO_ERROR_CODE({
    at::Tensor* out_tensor = new at::Tensor();
    *ret = tensor_pointer_to_tensor_handle(out_tensor);
  });
}
```

```cpp
Tensor() = default;

TensorBase() = default;
```

My understanding is that the `TensorBase() = default` constructor would indeed initialize its `c10::intrusive_ptr<TensorImpl, UndefinedTensorImpl> impl_` to a nullptr.

I think there is a special case where `UndefinedTensorImpl` is not a nullptr but has its `bool()` conversion return false when `defined()` is called, but we're not using that here.
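For intuition, here is a conceptual sketch of the semantics under discussion, using plain `at::Tensor` rather than this PR's code:

```cpp
#include <ATen/ATen.h>

int main() {
  at::Tensor t;               // default-constructed: no storage behind it
  bool before = t.defined();  // false
  t = at::empty({2, 3});      // now backed by a real TensorImpl
  bool after = t.defined();   // true
  return (before || !after) ? 1 : 0;  // exit 0 if the semantics hold
}
```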

libtorch_agnostic.ops.test_default_constructor(False)

final_memory = process.memory_info().rss
memory_increase = final_memory - initial_memory
Contributor

Woah, a way to test CPU memory! How confident are you that this test is testing the right thing? It looks okay to me, but I am not an expert.

Contributor Author

@mikaylagawarecki mikaylagawarecki Aug 8, 2025

I tried creating a list of tensors on CPU and the resident set size (RSS) increased, so I felt somewhat confident this was doing the right thing.

cc @albanD do you know :P

Collaborator

This will most likely be flaky, as we don't control what else might grab CPU memory.
I would recommend running this test on CUDA and checking CUDA memory, which is very reliable for this kind of thing.
Don't forget to add the @serialTest() decorator to avoid cross-pollution, of course ;)

Contributor Author

@mikaylagawarecki mikaylagawarecki Aug 8, 2025

This test checks that the following code doesn't leak memory:

```cpp
stable::Tensor a;
```

I don't think we can make this use any CUDA memory, unfortunately. Should we just remove the test? @janeyx99
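For context, a hypothetical C++ sketch of the kernel behind `libtorch_agnostic.ops.test_default_constructor`; only the name and the boolean argument come from the diff above, while the header path, parameter meaning, and body are assumptions:

```cpp
#include <torch/csrc/stable/tensor.h>  // assumed header location for torch::stable::Tensor

// Hypothetical kernel: default-construct a stable Tensor and report defined().
// The Python test calls this and checks that process RSS does not grow,
// i.e. that the default constructor does not leak.
bool test_default_constructor(bool undefined) {
  torch::stable::Tensor t;  // default constructor under test
  if (!undefined) {
    // the real test would assign a defined tensor here (details assumed)
  }
  return t.defined();
}
```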

mikaylagawarecki added a commit that referenced this pull request Aug 11, 2025
ghstack-source-id: 1155d44
Pull Request resolved: #159507
@pytorchmergebot
Collaborator

Starting merge as part of PR stack under #159328

1 similar comment
@pytorchmergebot
Collaborator

Starting merge as part of PR stack under #159328

mikaylagawarecki added a commit to mikaylagawarecki/pytorch that referenced this pull request Aug 11, 2025
@pytorchmergebot
Collaborator

Starting merge as part of PR stack under #159328
