Checklist
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
Describe the bug
When running inference with vLLM, the 26B model throws this error on certain images.
Reproduction
CUDA_VISIBLE_DEVICES=1,2 python -m vllm.entrypoints.openai.api_server --model /home/maasuser/InternVL2-26B --port 6781 --trust-remote-code --max_num_seqs 16 --max-model-len 4096 --chat-template /root/tmp/models/nvidia-ADPTV_InternVL2-8B/template/template.jinja --tensor-parallel-size 2 --gpu-memory-utilization
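For reference, a minimal sketch of how the served model can be queried with an image to trigger the failure. The prompt and image URL below are placeholders (the issue does not say which images cause the error), and the port and model path are taken from the launch command above.

```python
# Minimal sketch: query the vLLM OpenAI-compatible server started above.
# The prompt and image URL are placeholders; the issue does not specify
# which images trigger the error.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:6781/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="/home/maasuser/InternVL2-26B",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/some-image.jpg"},
                },
            ],
        }
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```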
Environment
Error traceback
No response