
fix: avoid crash on sdxl loras #658


Open · wants to merge 1 commit into master

Conversation


@wbruna wbruna commented Apr 15, 2025

Some SDXL LoRAs (e.g. PCM) can exceed 12k nodes.

Also tested with DMD2 4-step:

stable-diffusion.cpp/ggml/src/ggml.c:5764: GGML_ASSERT(cgraph->n_nodes < cgraph->size) failed
(...)
#12 0x000055769307cfa3 in ggml_build_forward_expand ()
#13 0x0000557692f0ff56 in LoraModel::build_lora_graph(std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, ggml_tensor*, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, ggml_tensor*> > >, SDVersion) ()

Before the assertion, the LoRA graph reaches 12608 nodes.

@LostRuins
Contributor

Related: 1be2491#r155465406

@DelusionalLogic

I ran into the problem too, and increasing the node limit didn't seem to have any ill effects (other than the increased memory usage, I guess). I can't help but wonder whether there's some way to calculate the required buffer size from the input instead of leaving it as a static define.

@tmathews

A couple of ideas, without looking at the code:

  • Make it configurable from input
  • Double the size if current size has reached the limit

Computing it from the input would also be a good idea.
