Qwen3-235B-A22B#
Introduction#
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support.
This document shows the main verification steps for the model, including supported features, feature configuration, environment preparation, single-node and multi-node deployment, and accuracy and performance evaluation.
The Qwen3-235B-A22B model is first supported in vllm-ascend:v0.8.4rc2.
Supported Features#
Refer to supported features to get the model’s supported feature matrix.
Refer to feature guide to get the feature’s configuration.
Environment Preparation#
Model Weight#
- Qwen3-235B-A22B (BF16 version): requires 1 Atlas 800 A3 (64G × 16) node, 1 Atlas 800 A2 (64G × 8) node, or 2 Atlas 800 A2 (32G × 8) nodes. Download model weight.
- Qwen3-235B-A22B-w8a8 (quantized version): requires 1 Atlas 800 A3 (64G × 16) node, 1 Atlas 800 A2 (64G × 8) node, or 2 Atlas 800 A2 (32G × 8) nodes. Download model weight.
It is recommended to download the model weight to the shared directory of multiple nodes, such as /root/.cache/.
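For example, a minimal sketch of downloading the quantized weights with the ModelScope command-line tool (assuming the modelscope package is installed; adjust the model ID and target directory to your setup):
pip install modelscope
modelscope download --model vllm-ascend/Qwen3-235B-A22B-w8a8 --local_dir /root/.cache/Qwen3-235B-A22B-w8a8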
Verify Multi-node Communication (Optional)#
If you want to deploy a multi-node environment, you need to verify multi-node communication according to verify multi-node communication environment.
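As a quick sanity check (a minimal sketch only; the linked guide above is the authoritative procedure), you can inspect an NPU NIC's IP and link status with hccn_tool on each node and ping a peer node's NPU IP:
# Query the IP address and link status of NPU 0's RoCE NIC
hccn_tool -i 0 -ip -g
hccn_tool -i 0 -link -g
# Ping the NPU IP reported on a peer node (replace the address accordingly)
hccn_tool -i 0 -ping -g address 10.0.0.2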
Installation#
For example, use the image quay.io/ascend/vllm-ascend:v0.11.0rc2 (for Atlas 800 A2) or quay.io/ascend/vllm-ascend:v0.11.0rc2-a3 (for Atlas 800 A3).
Select an image based on your machine type and start the Docker container on your node; refer to using docker.
# Update --device according to your device (Atlas A2: /dev/davinci[0-7], Atlas A3: /dev/davinci[0-15]).
# Update the vllm-ascend image according to your environment.
# Note you should download the weight to /root/.cache in advance.
# Update the vllm-ascend image
export IMAGE=m.daocloud.io/quay.io/ascend/vllm-ascend:v0.15.0rc1
export NAME=vllm-ascend
# Run the container using the defined variables
# Note: If you are running bridge network with docker, please expose available ports for multiple nodes communication in advance.
docker run --rm \
--name $NAME \
--net=host \
--shm-size=1g \
--device /dev/davinci0 \
--device /dev/davinci1 \
--device /dev/davinci2 \
--device /dev/davinci3 \
--device /dev/davinci4 \
--device /dev/davinci5 \
--device /dev/davinci6 \
--device /dev/davinci7 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/Ascend/driver/tools/hccn_tool:/usr/local/Ascend/driver/tools/hccn_tool \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-it $IMAGE bash
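Once inside the container, you can confirm that the NPUs are visible before proceeding, for example:
npu-smi info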
Alternatively, you can build everything from source. To install vllm-ascend from source, refer to set up using python.
If you want to deploy a multi-node environment, you need to set up the environment on each node.
Deployment#
Single-node Deployment#
Qwen3-235B-A22B and Qwen3-235B-A22B-w8a8 can both be deployed on 1 Atlas 800 A3 (64G × 16) node or 1 Atlas 800 A2 (64G × 8) node.
The quantized version needs to be started with the parameter --quantization ascend.
Run the following script to serve online inference with a 128k context.
#!/bin/sh
# Load model from ModelScope to speed up download
export VLLM_USE_MODELSCOPE=true
# To reduce memory fragmentation and avoid out of memory
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export HCCL_BUFFSIZE=512
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export VLLM_ASCEND_ENABLE_FLASHCOMM1=1
export TASK_QUEUE_ENABLE=1
vllm serve vllm-ascend/Qwen3-235B-A22B-w8a8 \
--host 0.0.0.0 \
--port 8000 \
--tensor-parallel-size 8 \
--data-parallel-size 1 \
--seed 1024 \
--quantization ascend \
--served-model-name qwen3 \
--max-num-seqs 32 \
--max-model-len 133000 \
--max-num-batched-tokens 8096 \
--enable-expert-parallel \
--trust-remote-code \
--gpu-memory-utilization 0.95 \
--hf-overrides '{"rope_parameters": {"rope_type":"yarn","rope_theta":1000000,"factor":4,"original_max_position_embeddings":32768}}' \
--compilation-config '{"cudagraph_mode":"FULL_DECODE_ONLY"}' \
--async-scheduling
Notice:
Qwen3-235B-A22B originally supports only a 40960-token context (max_position_embeddings). If you want to run it, or its related quantized weights, with long sequences (such as a 128k context), you must use the YaRN rope-scaling technique.
For vLLM versions v0.12.0 and newer, use the parameter --hf-overrides '{"rope_parameters": {"rope_type":"yarn","rope_theta":1000000,"factor":4,"original_max_position_embeddings":32768}}'.
For vLLM versions below v0.12.0, use the parameter --rope_scaling '{"rope_type":"yarn","factor":4,"original_max_position_embeddings":32768}'.
If you are using weights such as Qwen3-235B-A22B-Instruct-2507, which natively support long contexts, there is no need to add this parameter.
The parameters are explained as follows:
- --data-parallel-size 1 and --tensor-parallel-size 8 are common settings for the data parallelism (DP) and tensor parallelism (TP) sizes.
- --max-model-len represents the context length, which is the maximum value of the input plus output for a single request.
- --max-num-seqs indicates the maximum number of requests that each DP group is allowed to process. If the number of requests sent to the service exceeds this limit, the excess requests will remain in a waiting state and will not be scheduled. Note that the time spent in the waiting state is also counted in metrics such as TTFT and TPOT. Therefore, when testing performance, it is generally recommended that --max-num-seqs * --data-parallel-size >= the actual total concurrency.
- --max-num-batched-tokens represents the maximum number of tokens that the model can process in a single step. vLLM v1 scheduling currently enables ChunkedPrefill/SplitFuse by default, which means: (1) if the input length of a request is greater than --max-num-batched-tokens, it will be divided into multiple rounds of computation according to --max-num-batched-tokens; (2) decode requests are prioritized for scheduling, and prefill requests are scheduled only if there is spare capacity. Generally, a larger --max-num-batched-tokens lowers the overall latency but increases the pressure on GPU memory (activation usage).
- --gpu-memory-utilization represents the proportion of HBM that vLLM will use for actual inference. Its essential function is to calculate the available kv_cache size. During the warm-up phase (referred to as the profile run in vLLM), vLLM records the peak GPU memory usage of an inference pass with an input size of --max-num-batched-tokens. The available kv_cache size is then calculated as --gpu-memory-utilization * HBM size - peak GPU memory usage. Therefore, the larger the value of --gpu-memory-utilization, the more kv_cache is available. However, since the GPU memory usage during the warm-up phase may differ from that during actual inference (e.g., due to uneven EP load), setting --gpu-memory-utilization too high may lead to OOM (out of memory) during actual inference. The default value is 0.9.
- --enable-expert-parallel indicates that EP is enabled. Note that vLLM does not support a mixed approach of ETP and EP; that is, MoE layers can use either pure EP or pure TP.
- --no-enable-prefix-caching indicates that prefix caching is disabled. To enable it, remove this option.
- --quantization ascend indicates that quantization is used. To disable quantization, remove this option.
- --compilation-config contains configurations related to the aclgraph graph mode. The most significant options are "cudagraph_mode" and "cudagraph_capture_sizes": "cudagraph_mode" selects the specific graph mode; currently "PIECEWISE" and "FULL_DECODE_ONLY" are supported, the graph mode is mainly used to reduce the cost of operator dispatch, and "FULL_DECODE_ONLY" is currently recommended. "cudagraph_capture_sizes" represents the different levels of the graph mode; the default value is [1, 2, 4, 8, 16, 24, 32, 40, …, --max-num-seqs]. In graph mode, the input for graphs at different levels is fixed, and inputs between levels are automatically padded up to the next level. The default setting is currently recommended; only in some scenarios is it necessary to set this separately to achieve optimal performance (see the sketch after this list).
- export VLLM_ASCEND_ENABLE_FLASHCOMM1=1 indicates that the FlashComm1 optimization is enabled. Currently, this optimization is only supported for MoE in scenarios where tp_size > 1.
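For illustration only, a sketch of overriding the graph capture sizes might look like the following (the values are arbitrary examples, not a tuned recommendation):
--compilation-config '{"cudagraph_mode":"FULL_DECODE_ONLY","cudagraph_capture_sizes":[1,2,4,8,16,32]}' \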
Multi-node Deployment with MP (Recommended)#
Assume you have two Atlas 800 A3 (64G × 16) nodes (or two Atlas 800 A2 nodes), and want to deploy the Qwen3-235B-A22B model across multiple nodes.
Node 0
#!/bin/sh
# Load model from ModelScope to speed up download
export VLLM_USE_MODELSCOPE=true
# To reduce memory fragmentation and avoid out of memory
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
# local_ip is obtained through ifconfig
# nic_name is the network interface name corresponding to local_ip on the current node
nic_name="xxxx"
local_ip="xxxx"
export HCCL_IF_IP=$local_ip
export GLOO_SOCKET_IFNAME=$nic_name
export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export HCCL_BUFFSIZE=1024
export TASK_QUEUE_ENABLE=1
vllm serve vllm-ascend/Qwen3-235B-A22B \
--host 0.0.0.0 \
--port 8000 \
--data-parallel-size 2 \
--api-server-count 2 \
--data-parallel-size-local 1 \
--data-parallel-address $local_ip \
--data-parallel-rpc-port 13389 \
--seed 1024 \
--served-model-name qwen3 \
--tensor-parallel-size 8 \
--enable-expert-parallel \
--max-num-seqs 16 \
--max-model-len 32768 \
--max-num-batched-tokens 4096 \
--trust-remote-code \
--async-scheduling \
--gpu-memory-utilization 0.9
Node 1
#!/bin/sh
# Load model from ModelScope to speed up download
export VLLM_USE_MODELSCOPE=true
# To reduce memory fragmentation and avoid out of memory
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
# local_ip is obtained through ifconfig
# nic_name is the network interface name corresponding to local_ip on the current node
nic_name="xxxx"
local_ip="xxxx"
# The value of node0_ip must be consistent with the value of local_ip set in node0 (master node)
node0_ip="xxxx"
export HCCL_IF_IP=$local_ip
export GLOO_SOCKET_IFNAME=$nic_name
export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export HCCL_BUFFSIZE=1024
export TASK_QUEUE_ENABLE=1
vllm serve vllm-ascend/Qwen3-235B-A22B \
--host 0.0.0.0 \
--port 8000 \
--headless \
--data-parallel-size 2 \
--data-parallel-size-local 1 \
--data-parallel-start-rank 1 \
--data-parallel-address $node0_ip \
--data-parallel-rpc-port 13389 \
--seed 1024 \
--tensor-parallel-size 8 \
--served-model-name qwen3 \
--max-num-seqs 16 \
--max-model-len 32768 \
--max-num-batched-tokens 4096 \
--enable-expert-parallel \
--trust-remote-code \
--async-scheduling \
--gpu-memory-utilization 0.9
If the service starts successfully, the following information will be displayed on node 0:
INFO: Started server process [44610]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Started server process [44611]
INFO: Waiting for application startup.
INFO: Application startup complete.
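You can also confirm from a client machine that the API server on node 0 is reachable, for example by listing the served models (replace <node0_ip> with the actual address):
curl http://<node0_ip>:8000/v1/models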
Multi-node Deployment with Ray#
Refer to Ray Distributed (Qwen/Qwen3-235B-A22B).
Prefill-Decode Disaggregation#
For a prefill-decode disaggregated deployment, refer to the Three Node A3 – PD disaggregation example in the Reproducing Performance Results section below.
Functional Verification#
Once your server is started, you can query the model with input prompts:
curl http://<node0_ip>:<port>/v1/completions \
-H "Content-Type: application/json" \
-d '{
"model": "qwen3",
"prompt": "The future of AI is",
"max_completion_tokens": 50,
"temperature": 0
}'
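Since Qwen3 is a chat model, you can also query the OpenAI-compatible chat endpoint; a minimal sketch:
curl http://<node0_ip>:<port>/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "qwen3",
"messages": [{"role": "user", "content": "Give a one-sentence introduction to large language models."}],
"max_tokens": 64,
"temperature": 0
}'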
Accuracy Evaluation#
The following shows how to evaluate the model's accuracy.
Using AISBench#
Refer to Using AISBench for details.
After execution, you can get the result. Here is the result of Qwen3-235B-A22B-w8a8 in vllm-ascend:0.11.0rc0, for reference only.
| dataset | version | metric | mode | vllm-api-general-chat |
|---|---|---|---|---|
| cevaldataset | - | accuracy | gen | 91.16 |
Performance#
Using AISBench#
Refer to Using AISBench for performance evaluation for details.
Using vLLM Benchmark#
Run performance evaluation of Qwen3-235B-A22B-w8a8 as an example.
Refer to vllm benchmark for more details.
There are three vllm bench subcommands:
- latency: Benchmark the latency of a single batch of requests.
- serve: Benchmark the online serving throughput.
- throughput: Benchmark offline inference throughput.
Take serve as an example and run the following command.
export VLLM_USE_MODELSCOPE=true
vllm bench serve --model vllm-ascend/Qwen3-235B-A22B-w8a8 --dataset-name random --random-input 200 --num-prompts 200 --request-rate 1 --save-result --result-dir ./
After a few minutes, you will get the performance evaluation result.
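The latency and throughput subcommands benchmark the model offline, without a running server. As a hedged sketch of the latency subcommand, assuming the same quantized weights, with parameters that are illustrative rather than tuned:
export VLLM_USE_MODELSCOPE=true
vllm bench latency --model vllm-ascend/Qwen3-235B-A22B-w8a8 --quantization ascend --tensor-parallel-size 8 --enable-expert-parallel --input-len 128 --output-len 128 --batch-size 4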
Reproducing Performance Results#
In this section, we provide simple scripts to reproduce our latest performance results. It is also recommended to read the instructions above to understand the basic concepts and options in vLLM and vLLM-Ascend.
Environment#
vLLM v0.13.0
vLLM-Ascend v0.13.0rc1
CANN 8.3.RC2
torch_npu 2.8.0
HDK/driver 25.3.RC1
triton_ascend 3.2.0
Single Node A3 (64G*16)#
Example server scripts:
#!/bin/sh
# Load model from ModelScope to speed up download
export VLLM_USE_MODELSCOPE=true
# To reduce memory fragmentation and avoid out of memory
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export HCCL_BUFFSIZE=512
export HCCL_OP_EXPANSION_MODE="AIV"
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export VLLM_ASCEND_ENABLE_FLASHCOMM1=1
export VLLM_ASCEND_ENABLE_FUSED_MC2=1
export TASK_QUEUE_ENABLE=1
vllm serve vllm-ascend/Qwen3-235B-A22B-w8a8 \
--host 0.0.0.0 \
--port 8000 \
--tensor-parallel-size 4 \
--data-parallel-size 4 \
--seed 1024 \
--quantization ascend \
--served-model-name qwen3 \
--max-num-seqs 128 \
--max-model-len 40960 \
--max-num-batched-tokens 16384 \
--enable-expert-parallel \
--trust-remote-code \
--gpu-memory-utilization 0.9 \
--no-enable-prefix-caching \
--compilation-config '{"cudagraph_mode":"FULL_DECODE_ONLY"}' \
--async-scheduling
Benchmark scripts:
vllm bench serve --model qwen3 \
--tokenizer vllm-ascend/Qwen3-235B-A22B-w8a8 \
--ignore-eos \
--dataset-name random \
--random-input-len 3584 \
--random-output-len 1536 \
--num-prompts 800 \
--max-concurrency 160 \
--request-rate 24 \
--host 0.0.0.0 \
--port 8000
Reference test results:
| num_requests | concurrency | mean TTFT (ms) | mean TPOT (ms) | output token throughput (tok/s) |
|---|---|---|---|---|
| 720 | 144 | 4717.45 | 48.69 | 2761.72 |
Note:
Setting export VLLM_ASCEND_ENABLE_FUSED_MC2=1 enables fused MoE operators that reduce the time consumption of MoE in both prefill and decode. This is an experimental feature that currently only supports W8A8 quantization on Atlas A3 servers. If you encounter any problems when using this feature, you can disable it by setting export VLLM_ASCEND_ENABLE_FUSED_MC2=0 and report the issue to the vLLM-Ascend community.
Here we disable prefix caching because the benchmark uses random datasets. You can enable prefix caching if requests share long common prefixes.
Three Node A3 – PD disaggregation#
On three Atlas 800 A3 (64G*16) servers, we recommend using one node as one prefill instance and two nodes as one decode instance.
Example server scripts:
Prefill Node 1
#!/bin/sh
export HCCL_IF_IP=prefill_node_1_ip
# Set ifname according to your network setting
ifname=""
export GLOO_SOCKET_IFNAME=${ifname}
export TP_SOCKET_IFNAME=${ifname}
export HCCL_SOCKET_IFNAME=${ifname}
# Load model from ModelScope to speed up download
export VLLM_USE_MODELSCOPE=true
# To reduce memory fragmentation and avoid out of memory
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export HCCL_BUFFSIZE=512
export HCCL_OP_EXPANSION_MODE="AIV"
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export VLLM_ASCEND_ENABLE_FLASHCOMM1=1
export VLLM_ASCEND_ENABLE_FUSED_MC2=2
export TASK_QUEUE_ENABLE=1
export LD_LIBRARY_PATH=/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/mooncake:$LD_LIBRARY_PATH
vllm serve vllm-ascend/Qwen3-235B-A22B-w8a8 \
--host 0.0.0.0 \
--port 8000 \
--tensor-parallel-size 8 \
--data-parallel-size 2 \
--data-parallel-size-local 2 \
--data-parallel-start-rank 0 \
--data-parallel-address prefill_node_1_ip \
--data-parallel-rpc-port prefill_node_dp_port \
--seed 1024 \
--quantization ascend \
--served-model-name qwen3 \
--max-num-seqs 24 \
--max-model-len 40960 \
--max-num-batched-tokens 16384 \
--enable-expert-parallel \
--enforce-eager \
--trust-remote-code \
--gpu-memory-utilization 0.9 \
--no-enable-prefix-caching \
--kv-transfer-config \
'{"kv_connector": "MooncakeConnectorV1",
"kv_role": "kv_producer",
"kv_port": "30000",
"engine_id": "0",
"kv_connector_extra_config": {
"use_ascend_direct": true,
"prefill": {
"dp_size": 2,
"tp_size": 8
},
"decode": {
"dp_size": 8,
"tp_size": 4
}
}
}'
Decode Node 1
#!/bin/sh
export HCCL_IF_IP=decode_node_1_ip
ifname=""
export GLOO_SOCKET_IFNAME=${ifname}
export TP_SOCKET_IFNAME=${ifname}
export HCCL_SOCKET_IFNAME=${ifname}
# Load model from ModelScope to speed up download
export VLLM_USE_MODELSCOPE=true
# To reduce memory fragmentation and avoid out of memory
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export HCCL_BUFFSIZE=1024
export HCCL_OP_EXPANSION_MODE="AIV"
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export VLLM_ASCEND_ENABLE_FLASHCOMM1=1
export VLLM_ASCEND_ENABLE_FUSED_MC2=2
export TASK_QUEUE_ENABLE=1
export LD_LIBRARY_PATH=/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/mooncake:$LD_LIBRARY_PATH
vllm serve vllm-ascend/Qwen3-235B-A22B-w8a8 \
--host 0.0.0.0 \
--port 8000 \
--tensor-parallel-size 4 \
--data-parallel-size 8 \
--data-parallel-size-local 4 \
--data-parallel-start-rank 0 \
--data-parallel-address decode_node_1_ip \
--data-parallel-rpc-port decode_node_dp_port \
--seed 1024 \
--quantization ascend \
--served-model-name qwen3 \
--max-num-seqs 128 \
--max-model-len 40960 \
--max-num-batched-tokens 256 \
--enable-expert-parallel \
--trust-remote-code \
--gpu-memory-utilization 0.9 \
--compilation-config '{"cudagraph_mode":"FULL_DECODE_ONLY"}' \
--async-scheduling \
--no-enable-prefix-caching \
--kv-transfer-config \
'{"kv_connector": "MooncakeConnectorV1",
"kv_role": "kv_consumer",
"kv_port": "30100",
"engine_id": "1",
"kv_connector_extra_config": {
"use_ascend_direct": true,
"prefill": {
"dp_size": 2,
"tp_size": 8
},
"decode": {
"dp_size": 8,
"tp_size": 4
}
}
}'
Decode Node 2
#!/bin/sh
export HCCL_IF_IP=decode_node_2_ip
ifname=""
export GLOO_SOCKET_IFNAME=${ifname}
export TP_SOCKET_IFNAME=${ifname}
export HCCL_SOCKET_IFNAME=${ifname}
# Load model from ModelScope to speed up download
export VLLM_USE_MODELSCOPE=true
# To reduce memory fragmentation and avoid out of memory
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export HCCL_BUFFSIZE=1024
export HCCL_OP_EXPANSION_MODE="AIV"
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export VLLM_ASCEND_ENABLE_FLASHCOMM1=1
export VLLM_ASCEND_ENABLE_FUSED_MC2=2
export TASK_QUEUE_ENABLE=1
export LD_LIBRARY_PATH=/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/mooncake:$LD_LIBRARY_PATH
vllm serve vllm-ascend/Qwen3-235B-A22B-w8a8 \
--host 0.0.0.0 \
--port 8000 \
--headless \
--tensor-parallel-size 4 \
--data-parallel-size 8 \
--data-parallel-size-local 4 \
--data-parallel-start-rank 4 \
--data-parallel-address decode_node_1_ip \
--data-parallel-rpc-port decode_node_dp_port \
--seed 1024 \
--quantization ascend \
--served-model-name qwen3 \
--max-num-seqs 128 \
--max-model-len 40960 \
--max-num-batched-tokens 256 \
--enable-expert-parallel \
--trust-remote-code \
--gpu-memory-utilization 0.9 \
--compilation-config '{"cudagraph_mode":"FULL_DECODE_ONLY"}' \
--async-scheduling \
--no-enable-prefix-caching \
--kv-transfer-config \
'{"kv_connector": "MooncakeConnectorV1",
"kv_role": "kv_consumer",
"kv_port": "30100",
"engine_id": "1",
"kv_connector_extra_config": {
"use_ascend_direct": true,
"prefill": {
"dp_size": 2,
"tp_size": 8
},
"decode": {
"dp_size": 8,
"tp_size": 4
}
}
}'
PD proxy:
python load_balance_proxy_server_example.py --port 12347 --prefiller-hosts prefill_node_1_ip --prefiller-port 8000 --decoder-hosts decode_node_1_ip --decoder-ports 8000
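Before benchmarking, you can verify the disaggregated deployment end to end by sending a request through the proxy (a minimal check, assuming the proxy listens on port 12347 as started above):
curl http://localhost:12347/v1/completions \
-H "Content-Type: application/json" \
-d '{"model": "qwen3", "prompt": "The future of AI is", "max_tokens": 50, "temperature": 0}'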
Benchmark scripts:
vllm bench serve --model qwen3 \
--tokenizer vllm-ascend/Qwen3-235B-A22B-w8a8 \
--ignore-eos \
--dataset-name random \
--random-input-len 3584 \
--random-output-len 1536 \
--num-prompts 2880 \
--max-concurrency 576 \
--request-rate 8 \
--host 0.0.0.0 \
--port 12347
Reference test results:
| num_requests | concurrency | mean TTFT (ms) | mean TPOT (ms) | output token throughput (tok/s) |
|---|---|---|---|---|
| 2880 | 576 | 3735.98 | 52.07 | 8593.44 |
Note:
We recommend setting export VLLM_ASCEND_ENABLE_FUSED_MC2=2 in this scenario (typically EP32 for Qwen3-235B); this enables a different MoE fusion operator.