robbaa
315 days ago
Here's a data point for reference:
Environment: dual RTX 3090 + NVLink + Docker
Command: ollama run llama3:70b --verbose
It just barely fits.
ollama-1 | ggml_cuda_init: found 2 CUDA devices:
ollama-1 | Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
ollama-1 | Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
ollama-1 | llm_load_tensors: ggml ctx size = 0.83 MiB
ollama-1 | llm_load_tensors: offloading 80 repeating layers to GPU
ollama-1 | llm_load_tensors: offloading non-repeating layers to GPU
ollama-1 | llm_load_tensors: offloaded 81/81 layers to GPU
ollama-1 | llm_load_tensors: CPU buffer size = 563.62 MiB
ollama-1 | llm_load_tensors: CUDA0 buffer size = 18821.56 MiB
ollama-1 | llm_load_tensors: CUDA1 buffer size = 18725.42 MiB
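Why it "just barely fits": summing the two CUDA buffer sizes from the load log above gives the VRAM taken by the weights alone (the KV cache and compute buffers come on top of this). A quick sketch, using only the numbers printed above:

```python
# Per-device weight buffers reported by llm_load_tensors (MiB).
cuda0_mib = 18821.56
cuda1_mib = 18725.42

# Total weight footprint across both GPUs, in GiB.
total_gib = (cuda0_mib + cuda1_mib) / 1024
print(f"GPU weight buffers: {total_gib:.1f} GiB of 48 GiB total VRAM")
```

So roughly 36.7 of the 48 GiB is already spoken for before the KV cache is allocated, which is why the default quant of llama3:70b is right at the limit for 2x 24 GiB cards.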
Results from three test runs:

Run 1:
total duration: 25.820168178s
load duration: 1.436783ms
prompt eval count: 14 token(s)
prompt eval duration: 483.796ms
prompt eval rate: 28.94 tokens/s
eval count: 448 token(s)
eval duration: 25.203697s
eval rate: 17.78 tokens/s

Run 2:
total duration: 30.486672187s
load duration: 1.454596ms
prompt eval count: 479 token(s)
prompt eval duration: 2.025687s
prompt eval rate: 236.46 tokens/s
eval count: 496 token(s)
eval duration: 28.322837s
eval rate: 17.51 tokens/s

Run 3:
total duration: 21.176605423s
load duration: 2.629646ms
prompt eval count: 529 token(s)
prompt eval duration: 2.325535s
prompt eval rate: 227.47 tokens/s
eval count: 324 token(s)
eval duration: 18.622355s
eval rate: 17.40 tokens/s
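To summarize the three runs, here's a small sketch that recomputes the decode throughput from the eval counts and durations copied above (nothing here comes from ollama itself; it's just arithmetic on the printed numbers):

```python
# (eval count in tokens, eval duration in seconds) for each run above.
runs = [
    (448, 25.203697),
    (496, 28.322837),
    (324, 18.622355),
]

# Per-run decode rate, matching the "eval rate" lines in the output.
rates = [tokens / secs for tokens, secs in runs]

# Simple average vs. an overall rate weighted by duration.
avg = sum(rates) / len(rates)
overall = sum(t for t, _ in runs) / sum(s for _, s in runs)
print(f"simple average: {avg:.2f} tokens/s")      # → 17.56 tokens/s
print(f"duration-weighted: {overall:.2f} tokens/s")  # → 17.57 tokens/s
```

Either way, decode speed is stable around 17.5 tokens/s, while prompt eval jumps past 200 tokens/s once the prompt is long enough to batch well.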