
Model conversion issue #12941

Closed
@Eucliwoodprpr

Description


Hello, I am trying to use llama.cpp to convert my fine-tuned unsloth/DeepSeek-R1-Distill-Qwen-7B model to GGUF format. However, after conversion I ran into the same problems I saw during fine-tuning: the model's responses are not what I expect, and the output sometimes mixes in another language. I used the following command:

python convert_hf_to_gguf.py ./DeepSeek-R1-Medical-COT-Qwen-7B --outfile ./DeepSeek-R1-Medical-COT-Qwen-7B/DeepSeek-R1-Medical-COT-Qwen-7B.gguf --outtype f16

Am I possibly using the wrong Python script? Alternatively, is there any way to verify that the conversion to GGUF was done correctly?
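One quick sanity check (not a full verification, just a sketch under the assumption that your file follows the standard GGUF layout) is to confirm the file at least has a valid GGUF header: a `GGUF` magic string, a uint32 format version, and uint64 tensor/metadata counts, all little-endian. A truncated or corrupted conversion usually fails this immediately. The demo below builds a synthetic header in a temp file so it is self-contained; point `read_gguf_header` at your real `.gguf` instead.

```python
import struct


def read_gguf_header(path):
    """Read and validate the fixed-size GGUF header.

    Layout (little-endian): 4-byte magic b"GGUF", uint32 version,
    uint64 tensor count, uint64 metadata key/value count.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        (version,) = struct.unpack("<I", f.read(4))
        (tensor_count,) = struct.unpack("<Q", f.read(8))
        (kv_count,) = struct.unpack("<Q", f.read(8))
    return {
        "version": version,
        "tensor_count": tensor_count,
        "metadata_kv_count": kv_count,
    }


if __name__ == "__main__":
    import os
    import tempfile

    # Synthetic header for demonstration only; the counts here are
    # made up, not taken from any real model.
    tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".gguf")
    tmp.write(b"GGUF" + struct.pack("<IQQ", 3, 339, 25))
    tmp.close()
    print(read_gguf_header(tmp.name))
    os.remove(tmp.name)
```

For a behavioral check, loading the converted file with llama.cpp's own CLI (e.g. `llama-cli -m DeepSeek-R1-Medical-COT-Qwen-7B.gguf -p "..."`) and comparing responses against the original HF model on the same prompts is a more direct test. Note that mixed-language or degraded output can also come from a chat-template mismatch rather than the weight conversion itself.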
