[simplefsdp auto-bucketing] auto bucketing with greedy algorithm #158609
…orithm" ## Greedy Algorithm Design ### Runtime estimation estimation is done in [__->__ #157572]. #### Communication estimation: - Optimization: While realize communication data into realtensor gives us good estimation of a node, it could incur significant overhead in auto-bucketing. - Calibration-based estimation: We sample 20 samples evenly in a range of [min(fwd_ag_tensor_list), 0.3* sum(fwd_ag_tensor_list)], and benchmark its communication time. This also applies to backward reduce scatter tensors. - When a new IR comes, it will search the saved communication dictionary for the closest tensor size, and use the closest tensor runtime as its predicted runtime. #### Cache estimated runtime Here, we add a cache to save pre-estimated results. Leverage a comm cache dictionary & comp cache dictionary to save pre-estimated inductor ir. If it meets a new IR node with the same key, it will skip the estimation for the new IR node. - For comm, the key is communication type, input tensor size & output tensor size. - For comp, the key is (i) generated triton code for BaseSchedulerNode/FusedSchedulerNode; (ii) Extern Kernel args ### Greedy Algorithm Implementation Core idea: the greedy algorithm decide if the node will be bucketed with the previous one based on several criteria below. The reordering will by itself reorder with the previous computation. bucketing is done in [__->__ #158097] reordering is done in [__->__ #158098] #### FWD Pass: - (i) the bucketed AG communication could be overlapped by the previous computation; - (ii) the bucketed AG copy-in/out memory doesn’t exceed peak memory; - (iii) bucketed AG communication size doesn’t exceed 0.3* sum(fwd_ag_tensor_list), such that the estimated AG communication time is always in the calibration bound. #### BWD Pass: - (i) the bucketed AG + RS communication could be overlapped by the previous computation; - (ii) the bucketed AG+RS copy-in/out memory doesn’t exceed peak memory; - (iii) RS always have future compute to overlap it, such that its final exposed communication is small; - (iv) bucketed AG/RS communication size doesn’t exceed 0.3* sum(fwd_ag_tensor_list)/0.3* sum(bwd_rs_tensor_list), such that the estimated AG/RS communication time is always in the calibration bound. cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov coconutruben [ghstack-poisoned]
…cket helper function" This pr is based on Diff D67292294 from yf225. Major changes are: - Change the function structure to be compatible with auto-bucketing - Group bucketed nodes & dependencies with GroupedSchedulerNodes for easier reordering. Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #160282 * #158609 * #158321 * #158098 * __->__ #158097 * #157572 cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov [ghstack-poisoned]
…tion" This pr is based on Diff D67292294 from yf225. Major changes are: - Change the function structure to be compatible with auto-bucketing - Group bucketed nodes & dependencies with GroupedSchedulerNodes for easier reordering. Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #160282 * #158609 * #158321 * #158098 * __->__ #158097 * #157572 cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov [ghstack-poisoned]
Greedy Algorithm Design
Runtime estimation
Runtime estimation is done in #157572.
Communication estimation:
- Optimization: realizing the communication data into real tensors gives a good runtime estimate for a node, but doing so for every node incurs significant overhead in auto-bucketing.
- Calibration-based estimation: we take 20 samples evenly spaced in the range [min(fwd_ag_tensor_list), 0.3 * sum(fwd_ag_tensor_list)] and benchmark their communication time. The same applies to the backward reduce-scatter tensors.
- When a new IR node arrives, we search the saved communication dictionary for the closest tensor size and use that tensor's runtime as the predicted runtime.
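To make the calibration-based estimation concrete, here is a minimal Python sketch of the sampling and nearest-size lookup described above; the helper names (`calibrate_comm_times`, `benchmark_collective`, `estimate_comm_time`) are hypothetical and are not the functions used in this PR.

```python
import bisect

def calibrate_comm_times(tensor_sizes, benchmark_collective, num_samples=20):
    """Benchmark `num_samples` sizes evenly spaced in
    [min(tensor_sizes), 0.3 * sum(tensor_sizes)], returning {size: time}."""
    lo = min(tensor_sizes)
    hi = 0.3 * sum(tensor_sizes)
    step = (hi - lo) / (num_samples - 1)
    calibration = {}
    for i in range(num_samples):
        size = int(lo + i * step)
        # benchmark_collective(size) would build a tensor of that size and time
        # the collective (all-gather in fwd, reduce-scatter in bwd).
        calibration[size] = benchmark_collective(size)
    return calibration

def estimate_comm_time(size, calibration):
    """Predict a new IR node's runtime from the closest calibrated size."""
    sizes = sorted(calibration)
    idx = bisect.bisect_left(sizes, size)
    candidates = sizes[max(0, idx - 1): idx + 1]
    closest = min(candidates, key=lambda s: abs(s - size))
    return calibration[closest]
```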
Cache estimated runtime
We add a cache to save pre-estimated results, using a comm cache dictionary and a comp cache dictionary keyed by previously estimated Inductor IR nodes. When a new IR node shares a key with a cached node, its estimation is skipped and the cached runtime is reused.
- For comm, the key is the communication type plus the input and output tensor sizes.
- For comp, the key is (i) the generated Triton code for a BaseSchedulerNode/FusedSchedulerNode, or (ii) the ExternKernel args.
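A small sketch of how the two caches could be keyed, following the description above; the names (`comm_cache_key`, `comp_cache_key`, `cached_estimate`) are illustrative, not the PR's actual helpers.

```python
# comm_cache: (comm_type, input_size, output_size) -> estimated runtime
# comp_cache: code/args key                        -> estimated runtime
comm_cache = {}
comp_cache = {}

def comm_cache_key(comm_type, input_size, output_size):
    # e.g. ("all_gather_into_tensor", 1 << 20, 1 << 23)
    return (comm_type, input_size, output_size)

def comp_cache_key(triton_code=None, extern_kernel_args=None):
    # (i) generated Triton code for a BaseSchedulerNode/FusedSchedulerNode, or
    # (ii) the ExternKernel args.
    if triton_code is not None:
        return ("triton", triton_code)
    return ("extern", tuple(extern_kernel_args))

def cached_estimate(cache, key, estimate_fn):
    """Skip estimation for a new IR node whose key was already seen."""
    if key not in cache:
        cache[key] = estimate_fn()
    return cache[key]
```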
Greedy Algorithm Implementation
Core idea: the greedy algorithm decides whether a node is bucketed with the previous one based on the criteria below. The reordering pass then, on its own, reorders the bucketed communication relative to the preceding computation.
Bucketing is done in #158097.
Reordering is done in #158098.
FWD Pass:
- (i) the bucketed AG communication can be overlapped by the previous computation;
- (ii) the bucketed AG copy-in/copy-out memory does not exceed peak memory;
- (iii) the bucketed AG communication size does not exceed 0.3 * sum(fwd_ag_tensor_list), so the estimated AG communication time always stays within the calibration bound.
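A minimal sketch of the forward-pass decision, assuming hypothetical helpers (`est_comm_time`, `copy_in_out_bytes`) and simple scalar bookkeeping rather than the PR's actual scheduler-node implementation.

```python
def can_bucket_fwd(bucket_ag_bytes, node_ag_bytes, prev_compute_time,
                   peak_mem, mem_budget, fwd_ag_total,
                   est_comm_time, copy_in_out_bytes):
    """Decide whether the node's all-gather joins the current forward bucket."""
    new_size = bucket_ag_bytes + node_ag_bytes
    # (i) the bucketed AG must be hidden by the preceding computation
    if est_comm_time(new_size) > prev_compute_time:
        return False
    # (ii) copy-in/copy-out buffers must not push us over the peak-memory budget
    if peak_mem + copy_in_out_bytes(new_size) > mem_budget:
        return False
    # (iii) stay inside the calibration range: <= 0.3 * sum(fwd_ag_tensor_list)
    if new_size > 0.3 * fwd_ag_total:
        return False
    return True
```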
BWD Pass:
- (i) the bucketed AG + RS communication can be overlapped by the previous computation;
- (ii) the bucketed AG + RS copy-in/copy-out memory does not exceed peak memory;
- (iii) RS always has future compute to overlap it, so its final exposed communication is small;
- (iv) the bucketed AG/RS communication size does not exceed 0.3 * sum(fwd_ag_tensor_list) / 0.3 * sum(bwd_rs_tensor_list), so the estimated AG/RS communication time always stays within the calibration bound.
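And a sketch of how the backward-pass check could extend the forward one with the reduce-scatter criteria, under the same hypothetical helper names.

```python
def can_bucket_bwd(bucket_ag_bytes, bucket_rs_bytes, node_ag_bytes, node_rs_bytes,
                   prev_compute_time, future_compute_time,
                   peak_mem, mem_budget, fwd_ag_total, bwd_rs_total,
                   est_comm_time, copy_in_out_bytes):
    """Decide whether the node's AG/RS joins the current backward bucket."""
    ag_size = bucket_ag_bytes + node_ag_bytes
    rs_size = bucket_rs_bytes + node_rs_bytes
    # (i) the bucketed AG + RS must be hidden by the preceding computation
    if est_comm_time(ag_size) + est_comm_time(rs_size) > prev_compute_time:
        return False
    # (ii) copy-in/copy-out buffers must not exceed the peak-memory budget
    if peak_mem + copy_in_out_bytes(ag_size) + copy_in_out_bytes(rs_size) > mem_budget:
        return False
    # (iii) the RS must still have future compute left to overlap it, so the
    #       final exposed communication stays small
    if est_comm_time(rs_size) > future_compute_time:
        return False
    # (iv) stay inside the calibration ranges for both collectives
    if ag_size > 0.3 * fwd_ag_total or rs_size > 0.3 * bwd_rs_total:
        return False
    return True
```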
Stack from ghstack (oldest at bottom):
* #160282
* -> #158609
* #158321
* #158098
* #158097
* #157572
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben