
Revert FA2 kwargs construction #40029

Open · wants to merge 7 commits into main

Conversation

zucchini-nlp
Member

What does this PR do?

As per title, discussed internally.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@vasqu
Contributor

So if I understand the PR correctly, we're moving back to preparing during generate instead of during the FA forward? 👀

@zucchini-nlp
Member Author

> So if I understand the PR correctly, we're moving back to preparing during generate instead of during the FA forward? 👀

Yep, to be explicit + a little faster than computing it in FA2 utils per layer.
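For context, a minimal sketch (not the PR's actual code) of the varlen metadata FA2 needs, computed once up front instead of per layer. The helper name and return shape are illustrative, though the logic mirrors the classic `_get_unpad_data`-style preparation:

```python
import torch
import torch.nn.functional as F

def prepare_fa2_kwargs(attention_mask: torch.Tensor) -> dict:
    """Compute varlen FlashAttention-2 metadata once (e.g. in `generate`)
    instead of re-deriving it inside every attention layer.

    attention_mask: (batch, seq_len), 1 for real tokens, 0 for padding.
    """
    # Per-sequence lengths, then the indices of all non-padded tokens
    # (used to unpad/repad the hidden states around the FA2 call).
    seqlens_in_batch = attention_mask.sum(dim=-1, dtype=torch.int32)
    indices = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
    # Cumulative sequence lengths with a leading 0, the format expected by
    # flash-attn's varlen kernels (cu_seqlens_q / cu_seqlens_k).
    cu_seqlens = F.pad(torch.cumsum(seqlens_in_batch, dim=0, dtype=torch.int32), (1, 0))
    return {
        "indices": indices,
        "cu_seqlens": cu_seqlens,
        "max_seqlen_in_batch": int(seqlens_in_batch.max()),
    }
```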

@vasqu
Contributor

Thx, LGTM, will merge this into #40002 then; hopefully we'll have a somewhat cleaner FA then :D

Failing tests are mostly timeouts; the CI is having issues today. The only real failure is probably style, which is a correct one.

@ducviet00
Contributor

ducviet00 commented Aug 11, 2025

Morning @zucchini-nlp. I have a question here 🖐️

> Yep, to be explicit + a little faster than computing it in FA2 utils per layer.

What about BART-based models? They use three types of attention: encoder self-attention, decoder self-attention, and decoder cross-attention, each with different FA2 kwargs. Encoder self-attention can be skipped since it's computed only once at the start, but my concern is with decoder self-attention and decoder cross-attention.

Edit, to be clearer: if we initialize the FA kwargs during generation, can we set different kwargs for decoder self-attention and decoder cross-attention?
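To illustrate the concern, a hypothetical sketch (reusing the illustrative `prepare_fa2_kwargs` helper above, not anything from this PR): in an encoder-decoder model the query side and the key/value side come from different sequences, so one set of kwargs prepared in generate cannot serve both attention types:

```python
# Hypothetical sketch of why one kwargs dict isn't enough for BART-style
# models: the key/value-side metadata differs between the two decoder
# attention types.
decoder_meta = prepare_fa2_kwargs(decoder_attention_mask)
encoder_meta = prepare_fa2_kwargs(encoder_attention_mask)

# Decoder self-attention: queries and keys both come from the decoder.
self_attn_kwargs = {
    "cu_seqlens_q": decoder_meta["cu_seqlens"],
    "cu_seqlens_k": decoder_meta["cu_seqlens"],
    "max_seqlen_q": decoder_meta["max_seqlen_in_batch"],
    "max_seqlen_k": decoder_meta["max_seqlen_in_batch"],
}

# Decoder cross-attention: queries from the decoder, keys/values from the
# encoder output, so the key-side metadata comes from the encoder mask.
cross_attn_kwargs = {
    "cu_seqlens_q": decoder_meta["cu_seqlens"],
    "cu_seqlens_k": encoder_meta["cu_seqlens"],
    "max_seqlen_q": decoder_meta["max_seqlen_in_batch"],
    "max_seqlen_k": encoder_meta["max_seqlen_in_batch"],
}
```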

@vasqu
Contributor

vasqu commented Aug 11, 2025

@ducviet00 You're right that in this form it won't work for Bart, but Bart also doesn't support the attention_backend flag, so it shouldn't even enter the preparation path; it keeps the usual calls as before.

It will have to be changed in the future though, I agree!
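For readers following along, a rough sketch of the gating described above. The attribute name `_supports_attention_backend` is an assumption for illustration, not quoted from the PR diff:

```python
# Rough sketch, not the PR's code: models that don't opt into the
# attention backend path (e.g. Bart today) never reach the one-shot
# kwargs preparation and keep their usual per-layer calls.
if getattr(model, "_supports_attention_backend", False):
    fa2_kwargs = prepare_fa2_kwargs(attention_mask)  # prepared once in generate
else:
    fa2_kwargs = {}  # fall back to the pre-existing attention code path
```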
