Qwen3-VL-235B-A22B-Instruct#

Introduction#

The Qwen-VL (Vision-Language) series from Alibaba Cloud comprises a family of powerful Large Vision-Language Models (LVLMs) designed for comprehensive multimodal understanding. These models accept images, text, and bounding boxes as input and output text and detection boxes, enabling advanced capabilities such as image detection, multimodal dialogue, and multi-image reasoning.

This document walks through the main verification steps for the model, including supported features, feature configuration, environment preparation, NPU deployment, and accuracy and performance evaluation.

This tutorial uses vLLM-Ascend v0.11.0rc2 for demonstration, taking the Qwen3-VL-235B-A22B-Instruct model as an example of multi-NPU deployment.

Supported Features#

Refer to supported features to get the model’s supported feature matrix.

Refer to feature guide to get the feature’s configuration.

Environment Preparation#

Model Weight#

  • Qwen3-VL-235B-A22B-Instruct (BF16 version): requires 1 Atlas 800 A3 (64 GB × 16) node or 2 Atlas 800 A2 (64 GB × 8) nodes. Download model weight

It is recommended to download the model weight to a directory shared by all nodes, such as /root/.cache/.

Verify Multi-node Communication(Optional)#

If you want to deploy a multi-node environment, verify multi-node communication first according to verify multi-node communication environment.

Installation#

For example, use the image quay.io/ascend/vllm-ascend:v0.11.0rc2 (for Atlas 800 A2) or quay.io/ascend/vllm-ascend:v0.11.0rc2-a3 (for Atlas 800 A3).

Select an image based on your machine type and start the container on your node; refer to using docker.

  # Update --device according to your device (Atlas A2: /dev/davinci[0-7], Atlas A3: /dev/davinci[0-15]).
  # Update the vllm-ascend image according to your environment.
  # Note: you should download the weight to /root/.cache in advance.
  export IMAGE=m.daocloud.io/quay.io/ascend/vllm-ascend:v0.11.0rc2
  export NAME=vllm-ascend

  # Run the container using the defined variables
  # Note: If you are running bridge network with docker, please expose available ports for multiple nodes communication in advance
  docker run --rm \
  --name $NAME \
  --net=host \
  --privileged=true \
  --shm-size=500g \
  --device /dev/davinci0 \
  --device /dev/davinci1 \
  --device /dev/davinci2 \
  --device /dev/davinci3 \
  --device /dev/davinci4 \
  --device /dev/davinci5 \
  --device /dev/davinci6 \
  --device /dev/davinci7 \
  --device /dev/davinci_manager \
  --device /dev/devmm_svm \
  --device /dev/hisi_hdc \
  -v /usr/local/dcmi:/usr/local/dcmi \
  -v /usr/local/Ascend/driver/tools/hccn_tool:/usr/local/Ascend/driver/tools/hccn_tool \
  -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
  -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
  -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
  -v /etc/ascend_install.info:/etc/ascend_install.info \
  -it $IMAGE bash
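After the container starts, you can check that the NPUs are visible inside it; the npu-smi tool was mounted into the container by the command above.

```shell
# Inside the container: list the visible NPUs.
# Expect 8 devices on Atlas 800 A2 or 16 on Atlas 800 A3.
npu-smi info
```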

Alternatively, you can build everything from source.

If you want to deploy a multi-node environment, you need to set up the environment on each node.

Deployment#

Multi-node Deployment with Ray#
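As a hedged sketch of a Ray-based two-node Atlas 800 A2 deployment (node IPs, the port, the parallel size, and the served model name are placeholders and assumptions here; confirm flags against `vllm serve --help` for your version):

```shell
# On the head node (node0): start the Ray head process.
ray start --head --port=6379

# On the worker node (node1): join the Ray cluster (replace <node0_ip>).
ray start --address='<node0_ip>:6379'

# Back on the head node: launch the server across both nodes.
# 16 NPUs total (2 nodes x 8 cards); tensor parallel size 16 is an
# assumption here -- adjust it to your topology.
vllm serve Qwen/Qwen3-VL-235B-A22B-Instruct \
    --distributed-executor-backend ray \
    --tensor-parallel-size 16 \
    --served-model-name qwen3
```

Once the server reports it is ready, requests can be sent to node0 as shown in Functional Verification.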

Prefill-Decode Disaggregation#
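Prefill-decode disaggregation runs prefill and decode on separate instances that hand off KV caches. The sketch below is illustrative only: vLLM exposes a `--kv-transfer-config` option for this, but the connector name and the exact JSON fields supported by vllm-ascend are assumptions here; follow the feature guide linked above for the supported configuration.

```shell
# Prefill (producer) instance. <connector_name> is a placeholder (an
# assumption); substitute the connector documented in the feature guide.
vllm serve Qwen/Qwen3-VL-235B-A22B-Instruct \
    --kv-transfer-config '{"kv_connector": "<connector_name>", "kv_role": "kv_producer"}'

# Decode (consumer) instance.
vllm serve Qwen/Qwen3-VL-235B-A22B-Instruct \
    --kv-transfer-config '{"kv_connector": "<connector_name>", "kv_role": "kv_consumer"}'
```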

Functional Verification#

Once your server is started, you can query the model with input prompts:

curl http://<node0_ip>:<port>/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
    "model": "qwen3",
    "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": "https://modelscope.oss-cn-beijing.aliyuncs.com/resource/qwen.png"}},
        {"type": "text", "text": "What is the text in the illustration?"}
    ]}
    ]
    }'
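The inline -d body above is easy to mistype. One way to avoid quoting mistakes is to write the payload to a file and validate it locally before posting (the model name "qwen3" is assumed to match the name the server was started with):

```shell
# Write the request payload to a file; the quoted heredoc keeps the JSON literal.
cat > payload.json <<'EOF'
{
  "model": "qwen3",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": [
      {"type": "image_url", "image_url": {"url": "https://modelscope.oss-cn-beijing.aliyuncs.com/resource/qwen.png"}},
      {"type": "text", "text": "What is the text in the illustration?"}
    ]}
  ]
}
EOF
# Validate that the payload is well-formed JSON before sending it.
python3 -m json.tool payload.json > /dev/null && echo "payload OK"
```

Then send it with curl http://<node0_ip>:<port>/v1/chat/completions -H "Content-Type: application/json" -d @payload.json.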

Accuracy Evaluation#

Accuracy can be evaluated with AISBench as follows.

Using AISBench#

  1. Refer to Using AISBench for details.

  2. After execution, you can get the result. Here is the result of Qwen3-VL-235B-A22B-Instruct on vllm-ascend v0.11.0rc2, for reference only.

| dataset  | version | metric   | mode | vllm-api-general-chat |
|----------|---------|----------|------|-----------------------|
| aime2024 | -       | accuracy | gen  | 93                    |

Performance#

Using AISBench#

Refer to Using AISBench for performance evaluation for details.

Using vLLM Benchmark#

The following runs a performance evaluation of Qwen3-VL-235B-A22B-Instruct as an example.

Refer to vllm benchmark for more details.

There are three vllm bench subcommands:

  • latency: Benchmark the latency of a single batch of requests.

  • serve: Benchmark the online serving throughput.

  • throughput: Benchmark offline inference throughput.

Take serve as an example. Run the following command:

export VLLM_USE_MODELSCOPE=true
vllm bench serve --model Qwen/Qwen3-VL-235B-A22B-Instruct  --dataset-name random --random-input 200 --num-prompts 200 --request-rate 1 --save-result --result-dir ./
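For the offline throughput benchmark, a similar invocation can be used. The flag names below mirror the serve example but are assumptions to confirm with `vllm bench throughput --help` on your installed version:

```shell
export VLLM_USE_MODELSCOPE=true
# Offline throughput benchmark (no running server required);
# the input/output lengths and prompt count are illustrative values.
vllm bench throughput --model Qwen/Qwen3-VL-235B-A22B-Instruct \
    --input-len 200 --output-len 200 --num-prompts 200
```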

After a few minutes, you can get the performance evaluation result.