Replies: 2 comments
This looks like a PyTorch error. Check your PyTorch version.
Root cause: the vLLM binary extension was built against a different PyTorch ABI than the one you have installed.

Fixes:

1. Reinstall vLLM with matching PyTorch:

   ```
   pip uninstall vllm
   pip install vllm --no-cache-dir
   ```

2. Check your PyTorch version:

   ```python
   import torch
   print(torch.__version__)
   print(torch.version.cuda)
   ```

   vLLM needs specific CUDA + PyTorch combos.

3. Full clean install:

   ```
   pip uninstall vllm torch
   pip cache purge
   pip install torch==2.2.0+cu121 -f https://download.pytorch.org/whl/torch_stable.html
   pip install vllm
   ```

4. Build from source (if you have an unusual setup):

   ```
   git clone https://github.com/vllm-project/vllm.git
   cd vllm
   pip install -e .
   ```

Common causes: a PyTorch/CUDA combination left over from an earlier install that doesn't match what your vLLM wheel was compiled against.

We've debugged these ABI issues at RevolutionAI many times. The clean reinstall usually fixes it. What's your current PyTorch and CUDA version?
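To compare the two package versions without importing them, here is a minimal sketch using only the standard library's `importlib.metadata` (the `installed_version` helper is mine, not part of vLLM or pip):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg: str):
    """Return the installed version string of *pkg*, or None if it isn't installed."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

# Print both versions so you can check them against a supported combo
# for your vLLM release.
for pkg in ("torch", "vllm"):
    print(pkg, installed_version(pkg) or "not installed")
```

This avoids `import torch`, which matters here: when the ABI is broken, importing the package can itself fail, while reading the pip metadata still works.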
`vllm serve` fails to run with:

```
......lib/python3.12/site-packages/vllm/_C.abi3.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKSsb
```
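One way to confirm this is a dynamic-linking problem (rather than a CLI issue) is to try loading the extension directly with `ctypes` and capture the loader's error. This is a hedged sketch: the `diagnose` helper is mine, and the `so_path` shown is a placeholder — substitute the `_C.abi3.so` path from your own traceback.

```python
import ctypes

def diagnose(so_path: str) -> str:
    """Try to dlopen the vLLM C extension and report the loader error, if any."""
    try:
        import torch  # noqa: F401  # torch's shared libraries must be loaded first
        ctypes.CDLL(so_path)
        return "extension loaded cleanly"
    except (OSError, ImportError) as e:
        return f"load failed: {e}"

# Placeholder path -- replace with the path from your error message.
print(diagnose("lib/python3.12/site-packages/vllm/_C.abi3.so"))
```

If the reported failure repeats the same `undefined symbol` line, the extension and your installed torch really do disagree at the ABI level, and the clean-reinstall steps above apply.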