unsupported dtype 'F64' #153
Comments
Can you manually set the type to f32?
I have the same issue with a model called revAnimated_v122, but when I try to convert it to F32 or F16 the output is corrupt and only 809 KB. Did you manage to fix it?
Having this issue too with other models in the SD1.5 family. Could this be looked at? |
I suspect this is because GGML currently only supports integer and float types up to 32 bits wide in its ggml_type. It might be possible, if one is willing to accept a loss of precision, to convert down to a 32-bit float, provided no value exceeds FLT_MAX, via a callback in the sdcpp load_tensors function.
Could someone please link a model where they are having the F64 problem? I think I've put together a fix that at least seems to work with the LoRAs I have that use I64. |
https://civitai.com/models/8124?modelVersionId=87886 Look for the bigger files, around 4-5 GB; they are often FP64.
I've written a small converter program in C that re-encodes entire safetensors files, and it does seem to do the job; once I put in handling for big-endian systems I'll publish it. I'm having trouble getting similar logic to work a la carte at tensor-loading time in sdcpp, though.
Alright, here it is; feel free to give it a try. I was able to use it to convert both I64- and F64-containing models to something sdcpp could work with:
Can you add converting to fp16 too?
It now has handling to convert down into F16 as well as BF16 using the -f, --float-out switch, with the caveat that if your system isn't using IEEE-format floats and doubles it might not get the conversion right. Don't use the --replace option unless you want to risk losing data.
Hi! Thanks for the great tool!
When I run sd.cpp with some non-pruned SD models it drops this error.
It seems to me a simple thing to fix, either by supporting double floats internally or by converting to single floats during load.
I'd like to hear your thoughts on this situation and how to solve it.
With best regards, AG