Impossible to use LoRa #684
Could you give a few more specific examples of those crashes? Preferably with the complete command line, so they can be reproduced?
It isn't clear to me right now, so I will run some more tests to better characterize the types of crashes.
This one happened after loading a LoRA: `./sd -m "models/checkpoint/dreamshaperXL_v21TurboDPMSDE.safetensors" -p "Tall Goth girl, light skin, sunglasses, red lipstick, temple background, sword, vivid lights, high quality, realistic, realism, sfw, rating:safe, lora:PlayStation_1_SDXL:0.8" --lora-model-dir "models/lora" -n "blurry, blur, nude, naked, nsfw, lens distortion, bikini, text, watermark" --vae-tiling --vae-on-cpu --schedule karras -H 768 -W 768 --steps 8 -b 1 --seed -1`
One consistent thing: SD 1.5 seems very stable as long as I don't change the sampling method much.
Indeed, none of the SDXL/LoRA combinations I tried worked with stable-diffusion.cpp. I can confirm, however, that this PR fixes the issue: https://github.com/leejet/stable-diffusion.cpp/pull/658/files
Yes, there should be an option for the user to set it at runtime instead of it being hard-coded.
I am using the Vulkan backend because ROCm isn't an option for gfx1103.
This program is very good because I don't have to deal with the Python mess, but so far I have only been able to make a single LoRA model work once, and I have not been able to make it work again.
It's completely inconsistent: it's not related to size or model, and it happens with both SD 1.5 and SDXL, from 10 MB models to 300 MB models.
I don't believe it's RAM, because I also tried some big checkpoint models, I tried using CPU mode so only RAM is used, and I tried tweaking all the settings, but LoRAs simply don't work.
I believe I am doing it right, `<lora:model_name:0.6>`, but it just crashes; sometimes it gives a reason, sometimes it doesn't, so I don't really know what I need to do to get LoRAs working.
Update: it's definitely not RAM.
So, I decided to start from zero and made some discoveries. I was able to use a LoRA even on SDXL models with a basic run, so it was probably some other option causing the crashes.
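For reference, a stripped-down invocation along these lines is the kind of "basic run" that worked; the checkpoint path and LoRA name below are placeholders, not values from the original report:

```shell
# Hypothetical minimal stable-diffusion.cpp run with a single LoRA applied.
# Replace the checkpoint path and "my_lora" with your own files.
./sd -m "models/checkpoint/model.safetensors" \
     --lora-model-dir "models/lora" \
     -p "a photo of a cat <lora:my_lora:0.8>" \
     -H 512 -W 512 --steps 20
```

The point is that with default sampling and step settings the LoRA loads fine; the crashes only appear once extra options are layered on top.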
With all my tests I found that this program is extremely finicky. I got similar results on SD 1.5, SDXL, and turbo models:
- The sampling method can cause a crash (this one makes sense).
- The step count can cause a crash (tested on a turbo model with the default 8 steps and then with 20 steps; for some reason it didn't crash with 20 steps).
- LoRA combinations: if the LoRA models can work together, everything works perfectly; otherwise everything crashes. For example, with SD 1.5, some LoRAs will always crash, some will only work alone, and some will work together with other models.
So far, prompt size has not resulted in a crash; I tried a massive prompt and it worked perfectly well.