Closed
Description
When I use the following command:
sd.exe -m ..\models\sd_xl_base_1.0.safetensors --vae ..\models\sdxl_vae.safetensors --lora-model-dir ..\models -H 1024 -W 1024 -p "a lovely cat<lora:sd_xl_offset_example-lora_1.0:1>" -v
The LoRA model apparently cannot be used:
[INFO ] model.cpp:705 - load ..\models/sd_xl_offset_example-lora_1.0.safetensors using safetensors format
[DEBUG] model.cpp:771 - init from '..\models/sd_xl_offset_example-lora_1.0.safetensors'
[INFO ] lora.hpp:38 - loading LoRA from '..\models/sd_xl_offset_example-lora_1.0.safetensors'
[DEBUG] model.cpp:1343 - loading tensors from ..\models/sd_xl_offset_example-lora_1.0.safetensors
[DEBUG] ggml_extend.hpp:884 - lora params backend buffer size = 47.01 MB(VRAM) (10240 tensors)
[DEBUG] model.cpp:1343 - loading tensors from ..\models/sd_xl_offset_example-lora_1.0.safetensors
[DEBUG] lora.hpp:74 - finished loaded lora
[WARN ] lora.hpp:160 - unused lora tensor lora.unet_input_blocks_1_0_emb_layers_1.alpha
[WARN ] lora.hpp:160 - unused lora tensor lora.unet_input_blocks_1_0_emb_layers_1.lora_down.weight
[WARN ] lora.hpp:160 - unused lora tensor lora.unet_input_blocks_1_0_emb_layers_1.lora_up.weight
[WARN ] lora.hpp:160 - unused lora tensor lora.unet_input_blocks_1_0_in_layers_2.alpha
[WARN ] lora.hpp:160 - unused lora tensor lora.unet_input_blocks_1_0_in_layers_2.lora_down.weight
[WARN ] lora.hpp:160 - unused lora tensor lora.unet_input_blocks_1_0_in_layers_2.lora_up.weight
...
(hundreds of similar warnings)
It's the same problem as #117 (comment).
The LoRA model ends up not being used at all.
I'm using the latest master-a469688 release.
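As a diagnostic aside (not part of sd.cpp): each "unused lora tensor" warning means that LoRA key failed to match any tensor in the loaded model. The keys a .safetensors file actually contains can be listed by reading only its JSON header; below is a minimal stdlib-only sketch, where `list_safetensors_keys` is a hypothetical helper name, not an sd.cpp function:

```python
import json
import struct

def list_safetensors_keys(path):
    """Return the tensor names stored in a .safetensors file.

    The format starts with an 8-byte little-endian header size, followed by
    a JSON header mapping tensor names to dtype/shape/offsets, so no tensor
    data needs to be read.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    # "__metadata__" is an optional non-tensor entry in the header
    return [k for k in header if k != "__metadata__"]
```

Comparing that list against the warnings shows whether every key went unmatched or only a subset did.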
grauho commented on Mar 16, 2024
I had the same issue and addressed it in my pending pull request #200.
From what I can tell, it is because SDXL LoRAs use a slightly different naming convention that the current code isn't set up to convert to the internally used one. Also, the memory allocated for the GGML graph seems insufficient to accommodate adding an SDXL LoRA, so I had to bump that up as well.
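The naming mismatch can be made concrete: SDXL LoRA keys flatten dotted tensor paths with underscores (e.g. `unet_input_blocks_1_0_emb_layers_1`), but segments such as `emb_layers` legitimately contain underscores themselves, so the reverse mapping is ambiguous. Here is a hedged sketch of one way to resolve it, trying every dot/underscore split against the model's real tensor names; this is an illustration of the idea, not the actual code in PR #200:

```python
def resolve_lora_name(lora_key, model_tensor_names):
    """Map an underscore-flattened LoRA key back to a dotted tensor name.

    Tries every way of interpreting each underscore as either a path
    separator ('.') or a literal underscore, and returns the first
    candidate that matches a real tensor name in the model.
    """
    names = set(model_tensor_names)

    def search(prefix, rest):
        if not rest:
            return prefix if prefix in names else None
        parts = rest.split("_")
        # keep the first k parts joined by '_', then place a '.' boundary
        for k in range(len(parts), 0, -1):
            head = "_".join(parts[:k])
            candidate = f"{prefix}.{head}" if prefix else head
            found = search(candidate, "_".join(parts[k:]))
            if found is not None:
                return found
        return None

    return search("", lora_key)
```

The backtracking is exponential in the number of underscores in the worst case, but LoRA keys are short enough that this is not a problem in practice.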
Green-Sky commented on Mar 20, 2024
Now it is crashing, probably because the LoRA contains some f32 tensors AND `q8_0 + f32 -> q8_0` is not supported. Conversion seems to work, but it's not loading the converted file (it only looks for .safetensors and .ckpt).
grauho commented on Mar 20, 2024
Interesting, I'm not having that problem. What invocation are you using?
Green-Sky commented on Mar 20, 2024
Forgot to mention the obvious: I am using a model that is converted to q8_0. Maybe you can reproduce using `--type q8_0`. I did not test, but I think this is not SDXL-specific and has always been like this.
grauho commented on Mar 20, 2024
I assumed as much. Does this only happen when you use a quantized LoRA, and not just the quantized model, or both?
grauho commented on Mar 20, 2024
I am able to use `--type q8_0` on an SDXL model and SDXL LoRA without incident.
Green-Sky commented on Mar 20, 2024
The model is always quantized; the LoRA can't be quantized right now.
Sad, I thought I could get memory savings, and I deleted the .safetensors models <.<
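For context on why a quantized model plus an f32 LoRA is awkward: merging requires adding an f32 delta (`up @ down` scaled by `alpha/rank`) into quantized weights, i.e. dequantize, add, requantize. A toy sketch with a single per-tensor scale follows; real q8_0 uses one scale per 32-element block, and the function names here are illustrative, not sd.cpp's API:

```python
import numpy as np

def quantize_sym8(w):
    """Toy symmetric 8-bit quantization with one scale for the whole tensor."""
    scale = float(np.abs(w).max()) / 127.0
    if scale == 0.0:
        scale = 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_sym8(q, scale):
    return q.astype(np.float32) * scale

def apply_lora_to_quantized(q, scale, down, up, alpha):
    """Merge an f32 LoRA delta into a quantized weight:
    dequantize, add (alpha/rank) * (up @ down), requantize."""
    rank = down.shape[0]
    delta = (alpha / rank) * (up @ down)
    return quantize_sym8(dequantize_sym8(q, scale) + delta)
```

Each round trip adds quantization error (up to half a scale step per element), which is one reason an implementation may prefer to require f16/f32 weights when applying a LoRA rather than merging into quantized tensors directly.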
grauho commented on Mar 20, 2024
Alright, in that case it sounds like a separate quantization issue distinct from this one. I propose that this issue be marked as resolved.
piallai commented on Mar 22, 2024
I tried with the new release master-48bcce4.
The program now crashes when loading the LoRA, with `n_dims` valued 0. Here is the log, with the same command as in the OP.
Not exactly the same problem, but still related to LoRA loading. I suppose it's worth leaving the issue open.
grauho commented on Mar 22, 2024
In my opinion, it might be better to close this issue and file the new problem as its own issue with a more descriptive name, so that other people hitting it (or those with a solution) can find it more easily; it does not seem to be related to the issue in the original post. That would avoid readers of this issue never scrolling down to see that someone is in fact having the same problem they are.
bssrdf commented on Mar 22, 2024
I am still seeing some LoRAs not being applied, even with the fix from #200.
The LoRA weight file is xl_more_art-full_v1.safetensors, a very popular one.
grauho commented on Mar 22, 2024
Have you verified that the corresponding tensor exists in the model you are using?
bssrdf commented on Mar 22, 2024
I used this model file a while ago and it had no issue, unless the UNet changed since then (unlikely). I am wondering if this is due to the change introduced with the PhotoMaker PR #179. @leejet did a nice job of consolidating vanilla LoRA and PhotoMaker LoRA.
grauho commented on Mar 22, 2024
I'm not familiar with anything to do with PhotoMaker, but I would recommend checking the model to make sure the corresponding tensor is in fact present; just because it didn't warn you about this before doesn't mean it wasn't an issue.
bssrdf commented on Mar 22, 2024
I found that commenting out these lines fixes my issue, but I assume it will not work for other models:
stable-diffusion.cpp/lora.hpp, lines 93 to 95 in 48bcce4
I am using: