671B_0">部署deepseek-r1-671B
使用 4*A100 部署 deepseek-r1-671b-1.58bit 大模型。
Environment
- Ubuntu 22.04 LTS
- CUDA 12.2.0
Requirements
- RAM: 256 GB or more
- VRAM: 256 GB or more (160 GB can run the model, but it OOMs easily on long contexts); here: 4× A100 80 GB
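A quick sanity check of the hardware before starting (standard nvidia-smi and free invocations):
# one line per GPU with its total VRAM
nvidia-smi --query-gpu=name,memory.total --format=csv
# total system RAM
free -h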
vLLM
# First attempt: serve the GGUF through vLLM's OpenAI-compatible server.
# (--served-model-name is only the alias exposed by the API; the leftover Qwen2-7B-Instruct name is harmless.)
CUDA_VISIBLE_DEVICES=0,1,2,3 /data/miniconda3/envs/llm_py311-8/bin/python -m vllm.entrypoints.openai.api_server \
--port 8001 --served-model-name Qwen2-7B-Instruct \
--model /your/671B/model/path.gguf
This fails with the following error:
python3.11/site-packages/transformers/modeling_gguf_pytorch_utils.py", line 399, in load_gguf_checkpoint
raise ValueError(f"GGUF model with architecture {architecture} is not supported yet.")
ValueError: GGUF model with architecture deepseek2 is not supported yet.
Per issues on GitHub, vLLM did not yet support DeepSeek-R1-671B in GGUF form (the deepseek2 architecture), and there were assorted other problems, so I set vLLM aside for now.
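The rejected architecture string lives in the GGUF header; you can confirm it with gguf-dump from the pip-installable gguf package (the CLI name and --no-tensors flag are as I recall them; treat this as a sketch):
pip install gguf
gguf-dump --no-tensors /your/671B/model/path.gguf | grep -i architecture
# expect a line reporting: general.architecture = 'deepseek2'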
llama.cpp
Preparation
CUDA 12.0 or later is required; my version is 12.2.0 (inside Docker). Compilation failed for me on CUDA 11.5 (possibly related to the GPU driver or how it was built; I did not dig into it). GPUs: 4× A100.
Model download reference: https://www.ollama.com/SIGJNF/deepseek-r1-671b-1.58bit
Or download from: https://hf-mirror.com/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ1_M
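If you use the hf-mirror link, one non-interactive way to fetch just that quantization is huggingface-cli with the HF_ENDPOINT override (the --include pattern is an assumption matching the folder name above):
pip install -U "huggingface_hub[cli]"
HF_ENDPOINT=https://hf-mirror.com huggingface-cli download unsloth/DeepSeek-R1-GGUF \
--include "DeepSeek-R1-UD-IQ1_M/*" \
--local-dir ./DeepSeek-R1-GGUF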
Pull the container image (the download may be slow or flaky depending on your network):
docker pull nvcr.io/nvidia/cuda:12.2.0-cudnn8-devel-ubuntu22.04
Run the container:
docker run -it -d --name llama_cpp --gpus all \
-v /data/work/Star/.ollama/:/work/ollama/ \
-v /data/work/Star/llama.cpp:/work/llama.cpp/ \
-p 28000:8000 \
-p 27860:7860 \
-e TZ='Asia/Shanghai' \
nvcr.io/nvidia/cuda:12.2.0-cudnn8-devel-ubuntu22.04
Enter the container:
docker exec -it llama_cpp env LANG=C.UTF-8 /bin/bash
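The CUDA devel image is fairly bare; inside the container you will likely need the build toolchain first (typical Ubuntu package names):
apt-get update && apt-get install -y git cmake build-essential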
Pull the code
git clone https://github.com/ggml-org/llama.cpp
Build the GPU version
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j16
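Optionally, on A100s you can pin the CUDA architecture (compute capability 8.0) so the build does not compile kernels for every GPU generation; CMAKE_CUDA_ARCHITECTURES is a standard CMake variable, but treat this as an optional tweak:
# optional: build only for the A100 (sm_80) to shorten compile time
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=80
cmake --build build --config Release -j16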
Run the server
cd build/bin/
CUDA_VISIBLE_DEVICES=0,1,2,3 ./llama-server \
-m /path_to_model.gguf \
--port 7860 \
--cache-type-k q4_0 --threads 64 --prio 2 --temp 0.6 \
--ctx-size 8192 \
--seed 3407 \
--n-gpu-layers 1600
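Note that an --n-gpu-layers value far larger than the model's real layer count simply means "offload everything"; llama.cpp clamps it to the actual number of layers. Once the server is up, you can probe llama-server's /health endpoint (port as configured above):
curl http://localhost:7860/health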
Or, with a larger context and 4 parallel slots:
CUDA_VISIBLE_DEVICES=0,1,2,3 ./llama-server \
-m /path_to_model.gguf \
--port 7860 \
--host 0.0.0.0 \
-c 16384 \
-np 4 \
--n-gpu-layers 15000
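llama-server also exposes an OpenAI-compatible API; a minimal chat request against it (the model field is just an arbitrary label for this server):
curl http://localhost:7860/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "deepseek-r1", "messages": [{"role": "user", "content": "Hello"}], "max_tokens": 64}'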
For more parameter options, see:
https://github.com/ggml-org/llama.cpp/blob/master/examples/server/README.md
Concurrency test
5 concurrent requests (adjust the port to your own mapping):
curl --request POST --url http://localhost:17861/completion --header "Content-Type: application/json" --data '{"prompt": "Building a website can be done in 10 simple steps:","n_predict": 12}' &
curl --request POST --url http://localhost:17861/completion --header "Content-Type: application/json" --data '{"prompt": "Building a website can be done in 10 simple steps:","n_predict": 102}' &
curl --request POST --url http://localhost:17861/completion --header "Content-Type: application/json" --data '{"prompt": "Building a website can be done in 10 simple steps:","n_predict": 112}' &
curl --request POST --url http://localhost:17861/completion --header "Content-Type: application/json" --data '{"prompt": "Building a website can be done in 10 simple steps:","n_predict": 42}' &
curl --request POST --url http://localhost:17861/completion --header "Content-Type: application/json" --data '{"prompt": "Building a website can be done in 10 simple steps:","n_predict": 32}' &
The five requests share the same prompt but ask for different numbers of generated tokens (n_predict), exercising the server under concurrent load with responses of varying length.
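The same five requests can be fired from a loop and timed as a batch; wait blocks until all backgrounded curls return (a sketch; host/port as above):
time ( for n in 12 102 112 42 32; do
curl -s --request POST --url http://localhost:17861/completion \
--header "Content-Type: application/json" \
--data "{\"prompt\": \"Building a website can be done in 10 simple steps:\", \"n_predict\": $n}" &
done; wait )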
References
- https://github.com/ggml-org/llama.cpp
- https://github.com/ggml-org/llama.cpp/blob/master/examples/server/README.md
- https://hf-mirror.com/unsloth/DeepSeek-R1-GGUF
- https://www.ollama.com/SIGJNF/deepseek-r1-671b-1.58bit