mtmd : fix memory leak in mtmd_helper_eval_chunk_single #13961
Conversation
@@ -311,6 +311,7 @@ int32_t mtmd_helper_eval_chunk_single(mtmd_context * ctx,
         GGML_ABORT("chunk type not supported");
     }

+    llama_batch_free(text_batch);
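For context, the batch freed here is heap-allocated earlier in the helper via llama_batch_init (see the first trace below). A minimal sketch of that allocate/use/free shape, using an illustrative function name rather than the actual body of mtmd_helper_eval_chunk_single:

#include <cstdint>
#include "llama.h"

// Illustrative only: mirrors the allocate/use/free pattern, not the real helper body.
static int32_t eval_text_chunk_sketch(llama_context * lctx, int32_t n_batch) {
    llama_batch text_batch = llama_batch_init(n_batch, 0, 1); // heap-allocates token/pos/seq_id arrays
    int32_t ret = 0;
    (void) lctx; // ... fill text_batch with the chunk's tokens and call llama_decode(lctx, text_batch) ...
    llama_batch_free(text_batch); // the line this PR adds; omitting it leaks the arrays on every call
    return ret;
}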
FWIW, it seems to fix the following direct leaks:
Direct leak of 49176 byte(s) in 3 object(s) allocated from:
#0 0x0000004abdc8 in malloc (/llama.cpp/build/bin/llama-mtmd-cli+0x4abdc8) (BuildId: d0529438afdc212f2f3e006e74cba8cc230cfcc6)
#1 0x7fcb42e3a64b in llama_batch_init /llama.cpp/src/llama-batch.cpp:358:40
#2 0x7fcb4337b518 in mtmd_helper_eval_chunk_single /llama.cpp/tools/mtmd/mtmd-helper.cpp:255:30
#3 0x7fcb4337bcab in mtmd_helper_eval_chunks /llama.cpp/tools/mtmd/mtmd-helper.cpp:335:23
#4 0x0000004f3aa6 in eval_message(mtmd_cli_context&, common_chat_msg&, bool) /llama.cpp/tools/mtmd/mtmd-cli.cpp:222:9
#5 0x0000004f22fc in main /llama.cpp/tools/mtmd/mtmd-cli.cpp:300:13
#6 0x7fcb425a95f4 in __libc_start_call_main (/lib64/libc.so.6+0x35f4) (BuildId: 2b3c02fe7e4d3811767175b6f323692a10a4e116)
#7 0x7fcb425a96a7 in __libc_start_main@GLIBC_2.2.5 (/lib64/libc.so.6+0x36a7) (BuildId: 2b3c02fe7e4d3811767175b6f323692a10a4e116)
#8 0x0000004082b4 in _start (/llama.cpp/build/bin/llama-mtmd-cli+0x4082b4) (BuildId: d0529438afdc212f2f3e006e74cba8cc230cfcc6)
...
but with this PR applied, ASan still reports memory leaks in llama-mtmd-cli:
SUMMARY: AddressSanitizer: 654872 byte(s) leaked in 2096 allocation(s).
Direct leak of 16392 byte(s) in 1 object(s) allocated from:
#0 0x0000004abdc8 in malloc (/llama.cpp/build/bin/llama-mtmd-cli+0x4abdc8) (BuildId: 693cd788cb564003831ab992f5b80d129be780e8)
#1 0x7f748b1d464b in llama_batch_init /llama.cpp/src/llama-batch.cpp:358:40
#2 0x0000004f8d9e in mtmd_cli_context::mtmd_cli_context(common_params&) /llama.cpp/tools/mtmd/mtmd-cli.cpp:93:17
#3 0x0000004f1ea1 in main /llama.cpp/tools/mtmd/mtmd-cli.cpp:259:22
#4 0x7f748a9435f4 in __libc_start_call_main (/lib64/libc.so.6+0x35f4) (BuildId: 2b3c02fe7e4d3811767175b6f323692a10a4e116)
#5 0x7f748a9436a7 in __libc_start_main@GLIBC_2.2.5 (/lib64/libc.so.6+0x36a7) (BuildId: 2b3c02fe7e4d3811767175b6f323692a10a4e116)
#6 0x0000004082b4 in _start (/llama.cpp/build/bin/llama-mtmd-cli+0x4082b4) (BuildId: 693cd788cb564003831ab992f5b80d129be780e8)
Direct leak of 8192 byte(s) in 1 object(s) allocated from:
#0 0x0000004abdc8 in malloc (/llama.cpp/build/bin/llama-mtmd-cli+0x4abdc8) (BuildId: 693cd788cb564003831ab992f5b80d129be780e8)
#1 0x7f748b1d461e in llama_batch_init /llama.cpp/src/llama-batch.cpp:357:40
#2 0x0000004f8d9e in mtmd_cli_context::mtmd_cli_context(common_params&) /llama.cpp/tools/mtmd/mtmd-cli.cpp:93:17
#3 0x0000004f1ea1 in main /llama.cpp/tools/mtmd/mtmd-cli.cpp:259:22
#4 0x7f748a9435f4 in __libc_start_call_main (/lib64/libc.so.6+0x35f4) (BuildId: 2b3c02fe7e4d3811767175b6f323692a10a4e116)
#5 0x7f748a9436a7 in __libc_start_main@GLIBC_2.2.5 (/lib64/libc.so.6+0x36a7) (BuildId: 2b3c02fe7e4d3811767175b6f323692a10a4e116)
#6 0x0000004082b4 in _start (/llama.cpp/build/bin/llama-mtmd-cli+0x4082b4) (BuildId: 693cd788cb564003831ab992f5b80d129be780e8)
Direct leak of 8192 byte(s) in 1 object(s) allocated from:
#0 0x0000004abdc8 in malloc (/llama.cpp/build/bin/llama-mtmd-cli+0x4abdc8) (BuildId: 693cd788cb564003831ab992f5b80d129be780e8)
#1 0x7f748b1d45fa in llama_batch_init /llama.cpp/src/llama-batch.cpp:356:40
#2 0x0000004f8d9e in mtmd_cli_context::mtmd_cli_context(common_params&) /llama.cpp/tools/mtmd/mtmd-cli.cpp:93:17
#3 0x0000004f1ea1 in main /llama.cpp/tools/mtmd/mtmd-cli.cpp:259:22
#4 0x7f748a9435f4 in __libc_start_call_main (/lib64/libc.so.6+0x35f4) (BuildId: 2b3c02fe7e4d3811767175b6f323692a10a4e116)
#5 0x7f748a9436a7 in __libc_start_main@GLIBC_2.2.5 (/lib64/libc.so.6+0x36a7) (BuildId: 2b3c02fe7e4d3811767175b6f323692a10a4e116)
#6 0x0000004082b4 in _start (/llama.cpp/build/bin/llama-mtmd-cli+0x4082b4) (BuildId: 693cd788cb564003831ab992f5b80d129be780e8)
Direct leak of 8192 byte(s) in 1 object(s) allocated from:
#0 0x0000004abdc8 in malloc (/llama.cpp/build/bin/llama-mtmd-cli+0x4abdc8) (BuildId: 693cd788cb564003831ab992f5b80d129be780e8)
#1 0x7f748b1d45cb in llama_batch_init /llama.cpp/src/llama-batch.cpp
#2 0x0000004f8d9e in mtmd_cli_context::mtmd_cli_context(common_params&) /llama.cpp/tools/mtmd/mtmd-cli.cpp:93:17
#3 0x0000004f1ea1 in main /llama.cpp/tools/mtmd/mtmd-cli.cpp:259:22
#4 0x7f748a9435f4 in __libc_start_call_main (/lib64/libc.so.6+0x35f4) (BuildId: 2b3c02fe7e4d3811767175b6f323692a10a4e116)
#5 0x7f748a9436a7 in __libc_start_main@GLIBC_2.2.5 (/lib64/libc.so.6+0x36a7) (BuildId: 2b3c02fe7e4d3811767175b6f323692a10a4e116)
#6 0x0000004082b4 in _start (/llama.cpp/build/bin/llama-mtmd-cli+0x4082b4) (BuildId: 693cd788cb564003831ab992f5b80d129be780e8)
Direct leak of 2048 byte(s) in 1 object(s) allocated from:
#0 0x0000004abdc8 in malloc (/llama.cpp/build/bin/llama-mtmd-cli+0x4abdc8) (BuildId: 693cd788cb564003831ab992f5b80d129be780e8)
#1 0x7f748b1d46e0 in llama_batch_init /llama.cpp/src/llama-batch.cpp:364:40
#2 0x0000004f8d9e in mtmd_cli_context::mtmd_cli_context(common_params&) /llama.cpp/tools/mtmd/mtmd-cli.cpp:93:17
#3 0x0000004f1ea1 in main /llama.cpp/tools/mtmd/mtmd-cli.cpp:259:22
#4 0x7f748a9435f4 in __libc_start_call_main (/lib64/libc.so.6+0x35f4) (BuildId: 2b3c02fe7e4d3811767175b6f323692a10a4e116)
#5 0x7f748a9436a7 in __libc_start_main@GLIBC_2.2.5 (/lib64/libc.so.6+0x36a7) (BuildId: 2b3c02fe7e4d3811767175b6f323692a10a4e116)
#6 0x0000004082b4 in _start (/llama.cpp/build/bin/llama-mtmd-cli+0x4082b4) (BuildId: 693cd788cb564003831ab992f5b80d129be780e8)
Direct leak of 416 byte(s) in 1 object(s) allocated from:
#0 0x0000004f07c1 in operator new(unsigned long) (/llama.cpp/build/bin/llama-mtmd-cli+0x4f07c1) (BuildId: 693cd788cb564003831ab992f5b80d129be780e8)
#1 0x0000007f6a4f in common_sampler_init(llama_model const*, common_params_sampling const&) /llama.cpp/common/sampling.cpp:216:21
#2 0x0000004f1fba in main /llama.cpp/tools/mtmd/mtmd-cli.cpp:264:36
#3 0x7f748a9435f4 in __libc_start_call_main (/lib64/libc.so.6+0x35f4) (BuildId: 2b3c02fe7e4d3811767175b6f323692a10a4e116)
#4 0x7f748a9436a7 in __libc_start_main@GLIBC_2.2.5 (/lib64/libc.so.6+0x36a7) (BuildId: 2b3c02fe7e4d3811767175b6f323692a10a4e116)
#5 0x0000004082b4 in _start (/llama.cpp/build/bin/llama-mtmd-cli+0x4082b4) (BuildId: 693cd788cb564003831ab992f5b80d129be780e8)
It should be fixed in 6aaefb7. Could you give it a try?
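For reference, the remaining traces above all point at the llama_batch_init call in the mtmd_cli_context constructor (mtmd-cli.cpp:93), so a fix of this kind typically frees that batch when the context is torn down. A purely illustrative sketch, not the actual code of commit 6aaefb7:

#include <cstdint>
#include "llama.h"

// Hypothetical stand-in for mtmd_cli_context; only the batch ownership is shown.
struct cli_context_sketch {
    llama_batch batch;

    explicit cli_context_sketch(int32_t n_batch)
        : batch(llama_batch_init(n_batch, 0, 1)) {} // the allocation the traces point at

    ~cli_context_sketch() {
        llama_batch_free(batch); // releasing it on destruction removes the constructor-side leak
    }
};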
Looks like ASan no longer complains. Thanks!
FWIW, adding this line does fix the leak in my project without any noticeable side effects.
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Fix #13958
Maybe we should have a C++ wrapper in llama-cpp.h for llama_batch?
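A minimal sketch of one possible shape for such a wrapper, assuming a value-owning RAII type (llama_batch is returned by value rather than as an opaque pointer); the llama_batch_guard name and interface are purely illustrative, not an existing llama-cpp.h type:

#include <cstdint>
#include "llama.h"

// Hypothetical RAII guard: owns a llama_batch and frees it on destruction.
struct llama_batch_guard {
    llama_batch batch;

    llama_batch_guard(int32_t n_tokens, int32_t embd, int32_t n_seq_max)
        : batch(llama_batch_init(n_tokens, embd, n_seq_max)) {}

    ~llama_batch_guard() { llama_batch_free(batch); }

    // non-copyable: copying would free the same arrays twice
    llama_batch_guard(const llama_batch_guard &) = delete;
    llama_batch_guard & operator=(const llama_batch_guard &) = delete;

    // allow passing the guard wherever a llama_batch is expected
    operator llama_batch &() { return batch; }
};

With something like this in place, helpers such as mtmd_helper_eval_chunk_single would get the llama_batch_free call on every exit path for free.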