add progress callback, suppress pretty_progress #170
Conversation
Is there a specific reason to increase the size of many graphs and some reserved buffer sizes by 2 to 4 times? Additionally,
Here it is: #178. But it's not just the controlnet models; these errors happen with larger LoRA files too. I went through my "lora collection" and tested some LoRA models, and after I modified these params the larger LoRAs started working.
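(For context, a hedged illustration of the kind of sizing being discussed: in ggml, a compute graph's node capacity is fixed when the graph is created, so a LoRA that touches more tensors than the budget allows will overflow it. The constant name and value below are hypothetical, not the PR's actual code.)

```cpp
// Illustration only, not the PR's actual code: LORA_GRAPH_SIZE is a
// hypothetical constant, and the value actually needed depends on how many
// tensors the LoRA file touches.
#include "ggml.h"

static const size_t LORA_GRAPH_SIZE = 10240;  // raised from a smaller default

struct ggml_cgraph* build_lora_graph(struct ggml_context* ctx) {
    // ggml_new_graph_custom() fixes the node capacity up front; the last
    // argument says whether to allocate gradient storage (not needed here).
    return ggml_new_graph_custom(ctx, LORA_GRAPH_SIZE, false);
}
```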
Sorry, I don't remember what the problem was there. But the new method is required, because the original ggml_n_dims parameter type was not compatible with the TensorStorage type. If you want, I will try to reproduce the original problem. If I remember correctly, tensor_storage.n_dims was empty or corrupt.
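(A rough sketch of the kind of helper being described, assuming TensorStorage keeps its shape in an ne[] array the way ggml tensors do; the struct below is a reduced stand-in and the real PR code may have looked different. ggml_n_dims() itself takes a ggml_tensor*, so it cannot be called on a TensorStorage directly.)

```cpp
// Hedged sketch: count the effective number of dimensions of a TensorStorage
// the same way ggml_n_dims() does for a ggml_tensor, i.e. ignore trailing
// dimensions of size 1.
#include <cstdint>

constexpr int SD_MAX_DIMS = 4;  // assumption: 4-D shapes, as in ggml

struct TensorStorage {          // reduced stand-in for the real struct
    int64_t ne[SD_MAX_DIMS] = {1, 1, 1, 1};
};

static int tensor_storage_n_dims(const TensorStorage& ts) {
    for (int i = SD_MAX_DIMS - 1; i >= 1; --i) {
        if (ts.ne[i] > 1) {
            return i + 1;
        }
    }
    return 1;
}
```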
I reproduced the n_dims problem. If I add a LoRA to the prompt (a hair length slider LoRA), the following assertion fails:
I used the latest code from the master branch and did not encounter this issue. Can you try using the latest code from the master branch and see if the problem persists?
I have a guess about this question. In this file: https://github.com/leejet/stable-diffusion.cpp/blob/4a8190405ac32930678ce030dff6289ed680b6fc/.gitmodules#L3C44-L3C45
@leejet And here it is again, but with a completely fresh start: In WSL it works fine with LoRA too. Tested with a 256.6 MB LoRA :)
Built with MSVC 2019 in VS Code and started the diffusion, but got an error (please see the full command). Without LoRA it's fine.
The same thing happens here with the auto-built release.
Another test with the latest code. I re-applied my LoRA changes (n_dims), then tried to reproduce an image with an embedding:
Another shot:
Currently, support for very large embeddings is not available. I will add it later.
This issue is quite puzzling; I cannot replicate it in my local environment. |
I built on a "virgin" PC (AVX-512), and the same thing happens. Then I downloaded the prebuilt binary to the same PC (AVX-512), and that works fine. But the downloaded CUDA version fails on my machine too. Maybe the compiler is causing this? In the CI it's an Enterprise edition, but I use the Community edition.
@leejet I'm giving up on this n_dims issue. But the 'progress callback' feature is a good feature. Do you accept the PR as-is?
I feel like your problem is that you didn't pull the correct git submodule.
@Cyberhan123 I tested with a fresh start too.
Certainly, I'm willing to merge this PR, or we can merge the progress callback first if you prefer.
Okay. I pushed it with the ggml_n_dims_t method removed. After all, the 'progress callback' was the main point of this PR.
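(For readers landing here later, a minimal sketch of what a step-wise progress callback in a C-style API like this one typically looks like; the names and exact signature in the merged code may differ from what is shown.)

```cpp
// Hedged sketch of a progress-callback hook; not necessarily the merged API.
#include <cstdio>

// Called once per sampling step: current step, total steps, time spent on
// the step in seconds, plus an opaque user pointer.
typedef void (*sd_progress_cb_t)(int step, int steps, float time, void* user_data);

static sd_progress_cb_t g_progress_cb   = nullptr;
static void*            g_progress_data = nullptr;

// Register (or clear, by passing nullptr) the callback. When a callback is
// set, the built-in pretty_progress bar can be suppressed so the caller
// controls all progress output.
void sd_set_progress_callback(sd_progress_cb_t cb, void* user_data) {
    g_progress_cb   = cb;
    g_progress_data = user_data;
}

// Example callback: a plain-text progress line, e.g. for a GUI or a log file.
static void my_progress(int step, int steps, float time, void* /*user_data*/) {
    std::printf("step %d/%d (%.2fs)\n", step, steps, time);
}

int main() {
    sd_set_progress_callback(my_progress, nullptr);
    // Stand-in for the sampling loop inside the library; in the real code the
    // callback would fire from the diffusion steps.
    for (int step = 1; step <= 20; ++step) {
        if (g_progress_cb) {
            g_progress_cb(step, 20, 0.25f, g_progress_data);
        }
    }
    return 0;
}
```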
Thank you for your contribution.