
[simplefsdp auto-bucketing] auto bucketing with greedy algorithm #158609


Open
wants to merge 6 commits into gh/ruisizhang123/7/base

Conversation

ruisizhang123 (Contributor) commented Jul 17, 2025

Greedy Algorithm Design

Runtime estimation

Estimation is done in [-> #157572].

Communication estimation:

  • Optimization: while realizing communication data into a real tensor gives a good runtime estimate for a node, doing this for every node incurs significant overhead in auto-bucketing.
  • Calibration-based estimation: we take 20 sample sizes evenly spaced in the range [min(fwd_ag_tensor_list), 0.3 * sum(fwd_ag_tensor_list)] and benchmark the communication time of each. The same is done for backward reduce-scatter tensors.
  • When a new IR node arrives, we search the saved communication dictionary for the closest tensor size and use that entry's runtime as the predicted runtime (see the sketch after this list).
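
Below is a minimal sketch of how the calibration-based estimator could look, assuming an initialized torch.distributed process group on CUDA; the function names and the single-iteration timing are illustrative rather than the PR's actual code.

```python
import torch
import torch.distributed as dist

def calibrate_all_gather(tensor_sizes, num_samples=20, dtype=torch.bfloat16):
    """Benchmark all-gather for num_samples sizes spaced evenly in
    [min(tensor_sizes), 0.3 * sum(tensor_sizes)]; return {size: seconds}."""
    lo, hi = min(tensor_sizes), 0.3 * sum(tensor_sizes)
    sample_sizes = torch.linspace(lo, hi, num_samples).round().long().tolist()
    world_size = dist.get_world_size()
    runtimes = {}
    for size in sample_sizes:
        inp = torch.empty(size, dtype=dtype, device="cuda")
        out = torch.empty(size * world_size, dtype=dtype, device="cuda")
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        torch.cuda.synchronize()
        start.record()
        dist.all_gather_into_tensor(out, inp)
        end.record()
        torch.cuda.synchronize()
        runtimes[size] = start.elapsed_time(end) / 1e3  # elapsed_time is in ms
    return runtimes

def estimate_comm_time(runtimes, size):
    """Predict the runtime of a new collective from the closest calibrated size."""
    closest = min(runtimes, key=lambda s: abs(s - size))
    return runtimes[closest]
```

In practice the benchmark would warm up and average several iterations; a single timed call is enough to show the nearest-size lookup. The same calibration would be repeated with reduce_scatter_tensor for the backward RS tensors.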

Cache estimated runtime

Here, we add a cache to store pre-estimated results: a comm cache dictionary and a comp cache dictionary hold estimates for Inductor IR nodes. When a new IR node shares a key with a cached entry, estimation for that node is skipped.

  • For comm, the key is the communication type plus the input and output tensor sizes.
  • For comp, the key is (i) the generated Triton code for a BaseSchedulerNode/FusedSchedulerNode, or (ii) the ExternKernel args (see the sketch below).
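
A rough sketch of the two caches, with hypothetical helpers (comm_type, input_tensor_size, output_tensor_size, is_extern_kernel, extern_kernel_args, generated_triton_code) standing in for whatever extracts these key components from an Inductor IR node:

```python
# Illustrative caches; the real ones live inside the auto-bucketing pass.
comm_cache: dict = {}
comp_cache: dict = {}

def comm_key(node):
    # Key: collective type plus input/output tensor sizes.
    return (comm_type(node), input_tensor_size(node), output_tensor_size(node))

def comp_key(node):
    # Key: generated Triton code for a (Fused)SchedulerNode,
    # or the args of an extern kernel call.
    if is_extern_kernel(node):
        return ("extern", extern_kernel_args(node))
    return ("triton", generated_triton_code(node))

def cached_estimate(node, cache, key_fn, estimate_fn):
    key = key_fn(node)
    if key not in cache:          # only estimate keys we have not seen before
        cache[key] = estimate_fn(node)
    return cache[key]
```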

Greedy Algorithm Implementation

Core idea: the greedy algorithm decides whether a node is bucketed with the previous one based on the criteria below. The reordering pass then, on its own, moves the bucketed communication to overlap with the preceding computation.

Bucketing is done in [-> #158097]
Reordering is done in [-> #158098]

FWD Pass:

  • (i) the bucketed AG communication can be overlapped by the previous computation;
  • (ii) the bucketed AG copy-in/copy-out memory doesn’t exceed peak memory;
  • (iii) the bucketed AG communication size doesn’t exceed 0.3 * sum(fwd_ag_tensor_list), so that the estimated AG communication time always stays within the calibration bound (see the sketch after this list).
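
A sketch of how the forward-pass criteria could combine into one bucketing decision; the bucket bookkeeping (comm_size, copy_in_out_memory) and the calib_upper_bound argument are assumptions for illustration, not the PR's exact interface.

```python
def should_bucket_fwd_ag(bucket, ag_node, prev_compute_time,
                         peak_memory, ag_runtimes, calib_upper_bound):
    new_size = bucket.comm_size + comm_size(ag_node)      # hypothetical helpers
    new_time = estimate_comm_time(ag_runtimes, new_size)
    # (i) the bucketed AG must still be hidden by the previous computation
    if new_time > prev_compute_time:
        return False
    # (ii) the bucket's copy-in/copy-out buffers must fit under peak memory
    if bucket.copy_in_out_memory(ag_node) > peak_memory:
        return False
    # (iii) stay within the calibrated range, 0.3 * sum(fwd_ag_tensor_list)
    if new_size > calib_upper_bound:
        return False
    return True
```

When any check fails, the greedy pass would presumably close the current bucket and start a new one at ag_node.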

BWD Pass:

  • (i) the bucketed AG + RS communication can be overlapped by the previous computation;
  • (ii) the bucketed AG + RS copy-in/copy-out memory doesn’t exceed peak memory;
  • (iii) RS always has future compute left to overlap it, so that its final exposed communication is small;
  • (iv) the bucketed AG/RS communication size doesn’t exceed 0.3 * sum(fwd_ag_tensor_list) / 0.3 * sum(bwd_rs_tensor_list), so that the estimated AG/RS communication time always stays within the calibration bound (see the sketch after this list).
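
A matching sketch for the backward pass, with the extra check that RS still has future compute to hide behind; again, every helper and field here is illustrative.

```python
def should_bucket_bwd(bucket, ag_node, rs_node, prev_compute_time,
                      future_compute_time, peak_memory,
                      ag_runtimes, rs_runtimes, ag_bound, rs_bound):
    ag_size = bucket.ag_size + comm_size(ag_node)
    rs_size = bucket.rs_size + comm_size(rs_node)
    ag_time = estimate_comm_time(ag_runtimes, ag_size)
    rs_time = estimate_comm_time(rs_runtimes, rs_size)
    # (i) bucketed AG + RS must be hidden by the previous computation
    if ag_time + rs_time > prev_compute_time:
        return False
    # (ii) AG + RS copy-in/copy-out memory must stay under peak memory
    if bucket.copy_in_out_memory(ag_node, rs_node) > peak_memory:
        return False
    # (iii) RS must still have future compute left to overlap it
    if rs_time > future_compute_time:
        return False
    # (iv) stay within the calibrated ranges for AG and RS
    if ag_size > ag_bound or rs_size > rs_bound:
        return False
    return True
```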

Stack from ghstack (oldest at bottom):

cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben

[ghstack-poisoned]

pytorch-bot bot commented Jul 17, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/158609

Note: Links to docs will display an error until the docs builds have been completed.

❌ 4 New Failures, 9 Pending, 2 Unrelated Failures

As of commit 0f5879c with merge base 05c19d1:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.


This PR needs a release notes: label

If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

ruisizhang123 added a commit that referenced this pull request Jul 22, 2025
ruisizhang123 added a commit that referenced this pull request Jul 22, 2025
ruisizhang123 added a commit that referenced this pull request Jul 23, 2025
@ruisizhang123 changed the title from "[simplefsdp auto-bucketing] auto bucketing with heuristic" to "[simplefsdp auto-bucketing] auto bucketing with greedy algorithm" on Jul 30, 2025
ruisizhang123 added a commit that referenced this pull request Aug 11, 2025
…cket helper function"

This PR is based on Diff D67292294 from yf225.

Major changes are:

- Change the function structure to be compatible with auto-bucketing
- Group bucketed nodes & dependencies with GroupedSchedulerNodes for easier reordering.

Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):

* #160282
* #158609
* #158321
* #158098
*  __->__ #158097
* #157572



cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov

[ghstack-poisoned]
ruisizhang123 added a commit that referenced this pull request Aug 11, 2025
Labels
ciflow/inductor · module: inductor · oncall: distributed · release notes: distributed (c10d)