
Commit b1fd26f

pytorch xpu should be flash or mem efficient attention?
1 parent: 20447e9

1 file changed: +2 lines, -0 lines

comfy/model_management.py

@@ -693,6 +693,8 @@ def pytorch_attention_flash_attention():
     #TODO: more reliable way of checking for flash attention?
     if is_nvidia(): #pytorch flash attention only works on Nvidia
         return True
+    if is_intel_xpu():
+        return True
     return False
 
 def force_upcast_attention_dtype():
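
For readers skimming the hunk, below is a self-contained sketch of the function after this commit. The helper bodies for is_nvidia() and is_intel_xpu() are assumptions added here for illustration; in comfy/model_management.py they are separate helpers that inspect the selected torch device, and the real function may carry additional guards not visible in this hunk.

    import torch

    def is_nvidia():
        # Assumption: approximate ComfyUI's helper with a CUDA (non-ROCm) check.
        return torch.cuda.is_available() and torch.version.cuda is not None

    def is_intel_xpu():
        # Assumption: approximate ComfyUI's helper; hasattr() guards against
        # PyTorch builds compiled without XPU support.
        return hasattr(torch, "xpu") and torch.xpu.is_available()

    def pytorch_attention_flash_attention():
        #TODO: more reliable way of checking for flash attention?
        if is_nvidia(): #pytorch flash attention only works on Nvidia
            return True
        if is_intel_xpu(): #added by this commit
            return True
        return False

    if __name__ == "__main__":
        print("pytorch attention treated as flash attention:",
              pytorch_attention_flash_attention())

As the commit title notes, it is an open question whether scaled dot product attention on Intel XPU actually dispatches to a flash-attention kernel or to the memory-efficient backend; this change only makes the capability check report True for XPU.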
