# Long-Sequence Context Parallel (Qwen3-235B-A22B)

## Getting Started
vLLM-Ascend now supports long-sequence context parallel. This guide walks through verifying the feature step by step with constrained resources.

Using the Qwen3-235B-A22B-w8a8 (quantized) model as an example, deploy the single-node "PD co-locate" architecture on 1 Atlas 800 A3 (64 GB × 16) server.
## Environment Preparation

### Model Weight
Qwen3-235B-A22B-w8a8 (quantized version) requires 1 Atlas 800 A3 (64 GB × 16) node. Download model weight.

It is recommended to download the model weight to a directory shared by all nodes, such as `/root/.cache/`.
### Run with Docker
Start a Docker container on each node.
```shell
# Update the vllm-ascend image
export IMAGE=m.daocloud.io/quay.io/ascend/vllm-ascend:v0.15.0rc1
export NAME=vllm-ascend
# Run the container using the defined variables
# Note: if you are running Docker with the bridge network, expose the ports needed for multi-node communication in advance
docker run --rm \
--name $NAME \
--net=host \
--shm-size=1g \
--device /dev/davinci0 \
--device /dev/davinci1 \
--device /dev/davinci2 \
--device /dev/davinci3 \
--device /dev/davinci4 \
--device /dev/davinci5 \
--device /dev/davinci6 \
--device /dev/davinci7 \
--device /dev/davinci8 \
--device /dev/davinci9 \
--device /dev/davinci10 \
--device /dev/davinci11 \
--device /dev/davinci12 \
--device /dev/davinci13 \
--device /dev/davinci14 \
--device /dev/davinci15 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/Ascend/driver/tools/hccn_tool:/usr/local/Ascend/driver/tools/hccn_tool \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /etc/hccn.conf:/etc/hccn.conf \
-v /mnt/sfs_turbo/.cache:/root/.cache \
-it $IMAGE bash
```
## Deployment

### Single-node Deployment
Qwen3-235B-A22B-w8a8 can be deployed on 1 Atlas 800 A3 (64 GB × 16).

The quantized version must be started with the parameter `--quantization ascend`.

Run the following script to serve online 128k inference.
```shell
#!/bin/sh
# Load model from ModelScope to speed up download
export VLLM_USE_MODELSCOPE=true
# Reduce memory fragmentation and avoid out-of-memory errors
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export HCCL_BUFFSIZE=512
export HCCL_OP_EXPANSION_MODE="AIV"
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export VLLM_ASCEND_ENABLE_FLASHCOMM1=1
export TASK_QUEUE_ENABLE=1
export VLLM_ALLOW_LONG_MAX_MODEL_LEN=1
vllm serve vllm-ascend/Qwen3-235B-A22B-w8a8 \
  --host 0.0.0.0 \
  --port 8000 \
  --tensor-parallel-size 8 \
  --prefill-context-parallel-size 2 \
  --decode-context-parallel-size 2 \
  --seed 1024 \
  --quantization ascend \
  --served-model-name qwen3 \
  --max-num-seqs 1 \
  --max-model-len 133008 \
  --max-num-batched-tokens 133008 \
  --enable-expert-parallel \
  --trust-remote-code \
  --gpu-memory-utilization 0.95 \
  --hf-overrides '{"rope_parameters": {"rope_type":"yarn","rope_theta":1000000,"factor":4,"original_max_position_embeddings":32768}}' \
  --compilation-config '{"cudagraph_mode":"FULL_DECODE_ONLY", "cudagraph_capture_sizes":[1,2,4,8]}' \
  --async-scheduling
```
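Once the server is up, you can verify it with an OpenAI-compatible chat completion request. A minimal sketch that only builds the request payload (`qwen3` matches `--served-model-name` above, and `localhost:8000` matches `--host`/`--port`; the prompt content is a placeholder):

```python
import json

# Build an OpenAI-compatible chat completions payload for the server above.
# "qwen3" matches --served-model-name in the serve script.
payload = {
    "model": "qwen3",
    "messages": [{"role": "user", "content": "Summarize the following long document."}],
    "max_tokens": 512,
    "temperature": 0.6,
}

body = json.dumps(payload)
print(body)

# Send it with, for example:
#   curl http://localhost:8000/v1/chat/completions \
#        -H "Content-Type: application/json" \
#        -d "$body"
```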
Notice:

- For vLLM versions below v0.12.0, use the parameter `--rope_scaling '{"rope_type":"yarn","factor":4,"original_max_position_embeddings":32768}'`.
- For vLLM v0.12.0 and above, use the parameter `--hf-overrides '{"rope_parameters": {"rope_type":"yarn","rope_theta":1000000,"factor":4,"original_max_position_embeddings":32768}}'`.
The parameters are explained as follows:

- `--tensor-parallel-size`: 8 is a common setting for the tensor parallelism (TP) size.
- `--prefill-context-parallel-size`: 2 is a common setting for the prefill context parallelism (PCP) size.
- `--decode-context-parallel-size`: 2 is a common setting for the decode context parallelism (DCP) size.
- `--max-model-len`: the context length, i.e. the maximum value of input plus output for a single request.
- `--max-num-seqs`: the maximum number of requests each DP group is allowed to process. If the number of requests sent to the service exceeds this limit, the excess requests remain in a waiting state and are not scheduled. Note that time spent waiting is also counted in metrics such as TTFT and TPOT. Therefore, when testing performance, it is generally recommended that `--max-num-seqs` * `--data-parallel-size` >= the actual total concurrency.
- `--max-num-batched-tokens`: the maximum number of tokens the model can process in a single step. vLLM v1 scheduling enables ChunkPrefill/SplitFuse by default, which means: (1) if the input length of a request is greater than `--max-num-batched-tokens`, it is divided into multiple rounds of computation according to `--max-num-batched-tokens`; (2) decode requests are prioritized for scheduling, and prefill requests are scheduled only if there is available capacity. Generally, a larger `--max-num-batched-tokens` lowers overall latency but puts more pressure on GPU memory (activation usage).
- `--gpu-memory-utilization`: the proportion of HBM that vLLM uses for actual inference. Its essential function is to calculate the available kv_cache size. During the warm-up phase (the "profile run" in vLLM), vLLM records the peak GPU memory usage of an inference pass with an input size of `--max-num-batched-tokens`. The available kv_cache size is then calculated as `--gpu-memory-utilization` * HBM size - peak GPU memory usage. Therefore, the larger the value, the more kv_cache is available. However, since memory usage during warm-up may differ from actual inference (e.g., due to uneven EP load), setting it too high may lead to OOM (out of memory) during actual inference. The default value is 0.9.
- `--enable-expert-parallel`: enables EP. Note that vLLM does not support mixing ETP and EP; MoE can use either pure EP or pure TP.
- `--no-enable-prefix-caching`: disables prefix caching. To enable it, remove this option.
- `--quantization ascend`: indicates that quantization is used. To disable quantization, remove this option.
- `--compilation-config`: configurations related to the aclgraph graph mode. The most significant are `cudagraph_mode` and `cudagraph_capture_sizes`:
  - `cudagraph_mode`: the specific graph mode. Currently `PIECEWISE` and `FULL_DECODE_ONLY` are supported. The graph mode mainly reduces the cost of operator dispatch; `FULL_DECODE_ONLY` is currently recommended.
  - `cudagraph_capture_sizes`: the batch-size levels captured as graphs. The default is [1, 2, 4, 8, 16, 24, 32, 40, ..., `--max-num-seqs`]. In graph mode, the input size of each level is fixed, and inputs between levels are automatically padded up to the next level. The default setting is recommended; only in some scenarios is a manual setting needed to achieve optimal performance.
- `export VLLM_ASCEND_ENABLE_FLASHCOMM1=1`: enables the Flashcomm1 optimization. Currently, this optimization is only supported for MoE in scenarios where tp_size > 1.
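The kv_cache sizing and graph-capture padding described above can be sketched numerically. The numbers below are illustrative assumptions, not measurements:

```python
# Illustrative numbers: 64 GB HBM per NPU, 0.95 utilization, and an assumed
# 52 GiB peak usage recorded during the profile run.
hbm_gib = 64
gpu_memory_utilization = 0.95
peak_profile_usage_gib = 52

# Available kv_cache = gpu-memory-utilization * HBM size - peak profile-run usage.
kv_cache_gib = gpu_memory_utilization * hbm_gib - peak_profile_usage_gib
print(f"kv_cache budget: {kv_cache_gib:.1f} GiB")

# cudagraph_capture_sizes: a decode batch is padded up to the next captured level.
capture_sizes = [1, 2, 4, 8]

def padded_batch(batch_size):
    """Return the graph level that a decode batch of this size runs in."""
    return min(s for s in capture_sizes if s >= batch_size)

print(padded_batch(3))  # a batch of 3 is padded to the size-4 graph
```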
Notice:

- `tp_size` must be divisible by `dcp_size`.
- The decode context parallel size must be less than or equal to `max_dcp_size`, where `max_dcp_size = tensor_parallel_size // total_num_kv_heads`.
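These constraints can be checked before launch. A small sketch, assuming Qwen3-235B-A22B's 4 KV heads (`total_num_kv_heads` comes from the model config):

```python
def check_dcp(tensor_parallel_size, decode_context_parallel_size, total_num_kv_heads):
    """Validate the DCP constraints stated above; return max_dcp_size."""
    # tp_size must be divisible by dcp_size.
    assert tensor_parallel_size % decode_context_parallel_size == 0, \
        "tp_size must be divisible by dcp_size"
    # dcp_size must not exceed max_dcp_size = tp_size // total_num_kv_heads.
    max_dcp_size = tensor_parallel_size // total_num_kv_heads
    assert decode_context_parallel_size <= max_dcp_size, \
        f"dcp_size must be <= {max_dcp_size}"
    return max_dcp_size

# The deployment above: TP=8, DCP=2, 4 KV heads.
print(check_dcp(8, 2, 4))  # max_dcp_size = 2
```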
## Accuracy Evaluation

Here are two accuracy evaluation methods.

### Using AISBench
Refer to Using AISBench for details.
After execution, you can get the result. Here is the result of Qwen3-235B-A22B-w8a8, for reference only.
| dataset | version | metric | mode | vllm-api-general-chat |
|---|---|---|---|---|
| aime2024 | - | accuracy | gen | 83.33 |
## Performance

### Using AISBench
Refer to Using AISBench for performance evaluation for details.
### Using vLLM Benchmark
Run performance evaluation of Qwen3-235B-A22B-w8a8 as an example.
Refer to vllm benchmark for more details.
There are three `vllm bench` subcommands:

- `latency`: benchmark the latency of a single batch of requests.
- `serve`: benchmark online serving throughput.
- `throughput`: benchmark offline inference throughput.

Take `serve` as an example. Run the following:
```shell
export VLLM_USE_MODELSCOPE=true
vllm bench serve --model vllm-ascend/Qwen3-235B-A22B-w8a8 \
  --dataset-name random --random-input 131072 \
  --num-prompts 1 --request-rate 1 \
  --save-result --result-dir ./
```
After a few minutes, you can get the performance evaluation result.
| dataset | version | metric | mode | ttft |
|---|---|---|---|---|
| random | - | performance | perf | 17.36s |
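With `--save-result`, `vllm bench serve` writes a JSON summary into `--result-dir`. A sketch of reading the headline number back; the field name `mean_ttft_ms` and the sample value are assumptions based on a typical `vllm bench serve` result file, so verify them against the JSON your run actually produced:

```python
import json
from pathlib import Path

# Stand-in for a saved result file; the schema here is an assumption,
# check the JSON that vllm bench actually wrote into --result-dir.
sample = {"mean_ttft_ms": 17360.0}
path = Path("sample_result.json")
path.write_text(json.dumps(sample))

result = json.loads(path.read_text())
ttft_s = result["mean_ttft_ms"] / 1000  # convert ms to seconds
print(f"mean TTFT: {ttft_s:.2f}s")
```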