Chroma support (pruned Flux model) #696
base: master
Conversation
Ok, running the Vulkan build with preview on, I can say there's something very wrong going on. Sometimes (very rarely, I got this like twice in a hundred tests) the output looks correct after the first step, then it turns to noise, then a full black image (Probably NaN/inf). About half of the time it looks like noise from the first step already and then turns black after a few steps. The rest of the time it starts off with a black image and stays like that. It's extremely inconsistent. Edit: it seems inconsistent on CPU too, but it works more often |
Ran it on CUDA. Edit: worth noting that I do have d20f77f |
@Green-Sky Is it the same broken image every time you run it with the same settings, or is it inconsistent too? |
@stduhpf Ran it on CUDA too and I got this; it's inconsistent too. I ran it 10 times and got the same thing each time |
Yeah that's odd. Why would it vary? It's supposed to be deterministic |
I tried compiling it with ubsan and asan, but
and that's the first thing that happens. Looks like an upstream issue. Also, we should update ggml <.< Oh, and using the CPU backend, it crashes with: Details
|
Thank you for trying that, @Green-Sky. I believe it's working now. Vulkan backend, 16 steps with CFG (cfg_scale = 4), so 32 forward passes without anything breaking: |
OK, it's very important to keep the distilled guidance scale at 0. For some reason the model still accepts it as an input, but it completely breaks apart if it's not zero (I double-checked it with ComfyUI; it's not an issue with my code). Maybe I should just force it to zero for Chroma to keep things simple? |
Go for it. Down the line we should put recommended/forced values into the gguf file. |
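For reference, a minimal sketch of what forcing it could look like; the helper name and the `is_chroma` flag are assumptions for illustration, not the actual sd.cpp code:

```cpp
// Hypothetical guard (not the actual sd.cpp implementation): Chroma still
// accepts a distilled-guidance input, but non-zero values break the output,
// so the requested value is overridden before it reaches the graph.
float effective_guidance(bool is_chroma, float requested_guidance) {
    if (is_chroma) {
        return 0.0f;  // ignore whatever the user passed (e.g. via --guidance)
    }
    return requested_guidance;  // regular Flux keeps its distilled guidance
}
```

Baking recommended/forced values into the GGUF metadata, as suggested above, would make this kind of hard-coded special case unnecessary.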
The q4_k is hurting it somewhat, as expected.
Edit: I used v33 |
Keeping the v32:
Which raises the question: should we consider making the convert tool "smarter", like it is in llama.cpp, with different quant types depending on the role of each tensor? |
We should. I find this model with the current sd.cpp quantization incredibly hard to prompt too, but that is probably the token padding / masking. This is supposed to be q5_k quality; normal Flux looks way better, even Flux light looks better. |
Hi @stduhpf and @Green-Sky, apologies for the question, but is this branch intended to work solely with vanilla master and master's version of ggml? I've been maintaining a little script for personal use that merges specific branches from here and there to get a more up-to-date build, and no matter what I do I can't get this branch to work, either on its own or with other branches. Here's the extract from my script that shows the current state of what I have generally been merging in (with a few of them commented out, as you can see, but I've included them anyway since they do work in some other combinations):
BRANCHES="zhouwg/sync_with_latest_ggml"
BRANCHES="${BRANCHES} wbruna/fix_sdxl_lora"
BRANCHES="${BRANCHES} stduhpf/sdxl-embd"
BRANCHES="${BRANCHES} stduhpf/tiled-vae-encode"
BRANCHES="${BRANCHES} stduhpf/imatrix"
BRANCHES="${BRANCHES} stduhpf/lcppT5"
BRANCHES="${BRANCHES} stduhpf/unchained"
BRANCHES="${BRANCHES} stduhpf/dt"
# BRANCHES="${BRANCHES} stduhpf/diffusers"
BRANCHES="${BRANCHES} stduhpf/override-te"
BRANCHES="${BRANCHES} stduhpf/concat-controls"
# BRANCHES="${BRANCHES} stduhpf/ip2p"
BRANCHES="${BRANCHES} ImKyra/master"
# BRANCHES="${BRANCHES} Green-Sky/large_file_hardening"
# BRANCHES="${BRANCHES} rmatif/sigmas"
Now, I already suspected that this branch would probably not work directly in my script, since you said, "I had to update GGML to get bf16 to work on vulkan", and that probably conflicts with
Now, Chroma being a Flux derivative, I should also mention that for a while now I've not been able to get Flux to work either; the problems are generally of the form: a gajillion of these:
[ERROR] model.cpp:1938 - tensor 'first_stage_model.decoder.conv_in.bias' not in model file
or a gajillion of these (with the
[INFO ] model.cpp:1897 - unknown tensor 'model.diffusion_model.model.diffusion_model.double_blocks.0.img_attn.norm.key_norm.scale | f8_e4m3 | 1 [128, 1, 1, 1, 1]' in model file
(the doubled prefix probably indicating that something has automatically prefixed
followed by:
[ERROR] stable-diffusion.cpp:441 - load tensors from model loader failed
From looking at
And that's the point where I give up, since I don't know enough about the expected naming of the tensors. Which also brings us to the elephant in the room that you're all too polite to talk about :-) : vis-à-vis #686, given that I'm probably not the only one with the above problems, I'm sure that nobody would take offence if a temporary (friendly, and prominently attributed) fork were created to contain suitably-approved, merged, and conflict-resolved branches from all the developers who have pending PRs, as well as lots of useful work that is currently being duplicated across lots of forks. |
@kgigitdev I feel you. For my use case I depend on the webserver API.
This does not seem to be in this PR.
If we ended up doing this, I would ask @ggerganov to host that project at the ggml org. |
Hi @Green-Sky, thanks for your swift answer.
Gosh, yes, I hadn't even thought of that option. I think I had subconsciously assumed that stable-diffusion.cpp was only ever on the periphery of the llama.cpp people, because llama == serious work whereas stable diffusion == frippery and frivolity. |
That probably means the VAE is missing, or that the tensors from the VAE can't be found because their names are not the ones expected.
Yes, the doubled prefix means the tensor names are already prefixed in the model file you're using, which means you should use |
FWIW, I've been working on an option for sd.cpp to choose the quant type by tensor pattern, à la llama.cpp's tensor overrides: e128cfa. The conversion itself already works; it could be useful for testing. |
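For readers following the thread, here is a rough sketch of the general idea of picking a quant type per tensor by name pattern; the helper, the patterns, and the chosen types are illustrative assumptions, not the code in e128cfa or llama.cpp's actual override syntax:

```cpp
// Illustrative sketch: choose a ggml quant type based on substrings in the
// tensor name, falling back to the user-requested default for everything else.
#include <string>
#include <utility>
#include <vector>
#include "ggml.h"

static ggml_type pick_quant_type(const std::string& name, ggml_type default_type) {
    // (pattern, type) pairs checked in order; purely illustrative choices
    static const std::vector<std::pair<std::string, ggml_type>> overrides = {
        {"norm",        GGML_TYPE_F32},   // norm weights are tiny, keep them unquantized
        {"final_layer", GGML_TYPE_Q8_0},  // output projection: quantize gently
        {"img_attn",    GGML_TYPE_Q5_K},  // attention weights: keep extra precision
        {"txt_attn",    GGML_TYPE_Q5_K},
    };
    for (const auto& ov : overrides) {
        if (name.find(ov.first) != std::string::npos) {
            return ov.second;
        }
    }
    return default_type;
}
```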
@Green-Sky in t5.hpp, line 438 (I used
e.g. two-step sampling:
|
Nice catch! |
@stduhpf could you please release this in your repo for now |
@LostRuins please add this to your great app too |
Indeed I will, once it's merged here. stduhpf is doing very fine work. |
@Amin456789 Done. |
Well it works, sort of. Prompt is
Edit: Also, I had to double this to get it to work at higher resolutions:
stable-diffusion.cpp/stable-diffusion.cpp Line 1554 in 10c6501
|
@LostRuins try adding a simple negative prompt. |
Ah maybe my fix to avoid including more and more padding at each diffusion step messed up the empty negative prompts. I can't check it today. |
I will be happy to test any fix you have on the same prompt and seed again. |
You could try with the penultimate commit (efaa137). The padding is technically incorrect with this one, but it doesn't seem to matter in practice. |
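As an aside, here is a tiny sketch of the "pad once to a fixed length" idea being discussed; the function and parameters are assumptions for illustration, not the code in either commit. With this approach an empty negative prompt simply becomes all padding, instead of gaining more padding at every step:

```cpp
// Illustrative only: pad (or truncate) the tokenized prompt to a fixed length
// up front, so the conditioning shape never changes between diffusion steps.
#include <cstddef>
#include <vector>

std::vector<int> pad_prompt_tokens(std::vector<int> tokens, std::size_t target_len, int pad_id) {
    if (tokens.size() > target_len) {
        tokens.resize(target_len);      // truncate overly long prompts
    }
    tokens.resize(target_len, pad_id);  // fill the remainder with pad tokens
    return tokens;                      // an empty prompt comes back as all padding
}
```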
@stduhpf @Green-Sky Significant improvement for the case with no negative prompt. I used the same seeds for both cases. The top one has no negative prompt, the bottom one does (same as the earlier test). Oddly, the one with the negative prompt has changed too; it seems a bit worse now? Could just be variance though. |
It could be because of the incorrect padding. |
I see. I will save my settings and try again to compare after it's fixed then. Will provide a good benchmark. Thanks! |
@LostRuins Did you quantize v35 to q4_0 and convert to GGUF yourself, or did you use someone else's quantized model? (I'm trying to reproduce your issue; on my machine, on the latest commit, it looks like this using silveroxides' v29 q4_0, with the other settings the same as yours, without a negative prompt) |
I used a preconverted model. You can try it here: https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v35/chroma-unlocked-v35-Q4_0.gguf
T5 from
Generation params:
The images in both cases are reproducible by me with the same settings. |
I'm using t5xxl_q4_k.gguf though, maybe that could be the cause of the difference? Or maybe you're using flash attention? |
OK, that's a bigger difference than I expected from just choosing a different T5 quant. But it's still not looking too broken on my end... |
Um, the paws in your last image are completely mashed. This is not Flux quality heheh... Unless Chroma itself is broken, but even SD1.5 can generate better paws than that 😂 Can you try the last option, but first revert your stduhpf@e22c57c, keep the same seed and everything else the same, and compare the result? How does it look? |
This could just be an effect of the quantization to be honest. q4_0 is not the best for quality. |
Haha well I dunno... maybe someone else can chip in. Could indeed just be the model or quant maybe? |
@LostRuins Are you using flash attention or not? Because after looking quickly into it, it seems like flash attention only works with either causal attention masks or no mask at all, and in the case of Chroma, the mask is weirdly shaped. |
That requires |
Then I'm clueless. |
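To illustrate why this matters: Chroma attends over the concatenated text+image sequence and only needs to hide padded T5 positions, so the mask is a padding mask rather than a causal one, and a flash-attention kernel that only handles "causal" or "no mask" can't express it. A plain-C++ sketch of that mask shape (layout and names are assumptions for illustration, not the sd.cpp implementation):

```cpp
// Illustrative sketch: an (n x n) additive attention mask over the joint
// [text | image] sequence where padded text columns are set to -inf for every
// query. This is neither causal nor empty, hence the flash-attention limitation.
#include <cstddef>
#include <limits>
#include <vector>

std::vector<float> build_chroma_attn_mask(int n_txt, int n_txt_valid, int n_img) {
    const int n = n_txt + n_img;
    const float neg_inf = -std::numeric_limits<float>::infinity();
    std::vector<float> mask(static_cast<std::size_t>(n) * n, 0.0f);  // 0 = attend
    for (int q = 0; q < n; ++q) {
        for (int k = n_txt_valid; k < n_txt; ++k) {
            mask[static_cast<std::size_t>(q) * n + k] = neg_inf;     // hide padded text tokens
        }
    }
    return mask;
}
```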
Alright, well, we'll go with whatever solution you think works best. I'm just rather surprised at how wonky the Chroma outputs are. Maybe it is just the model. But I think you'd agree that these are a big step down from Flux, subjectively speaking. |
https://huggingface.co/lodestones/Chroma
Chroma is a Flux model with the modulation layers pruned off, which gives it a smaller memory footprint. Unlike Flux, it doesn't use CLIP-L, only T5-XXL.
Usage
Advanced usage
The following environment variables can be set to change the behavior:
--guidance 0 (setting this arg to anything else seems to break inference) (closes #690)