Quantization Guide#

Model quantization is a technique that reduces model size and computational overhead by lowering the numerical precision of weights and activations (for example, from FP16 to INT8), which cuts memory use and improves inference speed.
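The core idea can be illustrated with symmetric INT8 quantization: floats are mapped onto integers in [-127, 127] using a single scale factor, stored at a quarter of the FP32 size, and dequantized back with a small rounding error. This is an illustrative sketch of the concept, not the algorithm any particular tool uses:

```python
def quantize_int8(values):
    """Symmetric INT8 quantization: map floats onto integers in [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0  # one scale for the whole tensor
    return [max(-127, min(127, round(v / scale))) for v in values], scale

def dequantize_int8(q, scale):
    """Recover an approximation of the original float values."""
    return [x * scale for x in q]

weights = [0.42, -1.27, 0.08, 0.95]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)

# Each restored value is within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

The quantization error per element is bounded by the scale, which is why schemes with finer granularity (per-channel or per-token scales) preserve more accuracy than a single per-tensor scale.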

vLLM Ascend supports multiple quantization methods. This guide provides instructions for using different quantization tools and running quantized models on vLLM Ascend.

Note

You can quantize the model yourself or use a pre-quantized model we uploaded, e.g. https://www.modelscope.cn/models/vllm-ascend/Kimi-K2-Instruct-W8A8. Before quantizing a model, ensure sufficient host RAM is available.
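As a rough rule of thumb (an illustration, not an official requirement), you can estimate the weight-storage footprint from the parameter count and bit width; quantization typically also needs headroom for the original-precision weights during conversion:

```python
def model_size_gib(n_params, bits_per_weight):
    """Approximate weight-storage size in GiB, ignoring activations and overhead."""
    return n_params * bits_per_weight / 8 / 1024**3

# A hypothetical 7B-parameter model:
fp16 = model_size_gib(7e9, 16)   # roughly 13 GiB
int8 = model_size_gib(7e9, 8)    # roughly 6.5 GiB, half the FP16 size
print(f"FP16: {fp16:.1f} GiB, W8A8 weights: {int8:.1f} GiB")
```

Actual peak memory during quantization is higher than this estimate, since the tool must hold the source weights plus calibration activations.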

Quantization Tools#

vLLM Ascend supports models quantized by two main tools: ModelSlim and LLM-Compressor.

2. LLM-Compressor#

LLM-Compressor is a unified library for producing compressed models for faster inference with vLLM.

Installation#

pip install llmcompressor

Model Quantization#

LLM-Compressor provides various quantization scheme examples.

Dense Quantization#

An example of generating W8A8 dynamic quantized weights for a dense model:

# Navigate to LLM-Compressor examples directory
cd examples/quantization/llm-compressor

# Run quantization script
python3 w8a8_int8_dynamic.py

MoE Quantization#

An example of generating W8A8 dynamic quantized weights for an MoE model:

# Navigate to LLM-Compressor examples directory
cd examples/quantization/llm-compressor

# Run quantization script
python3 w8a8_int8_dynamic_moe.py

For more examples, refer to the official LLM-Compressor examples.

Quantization types currently supported via LLM-Compressor: W8A8 and W8A8_DYNAMIC.
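The difference between the two schemes is where the activation scale comes from: W8A8 uses a static scale computed offline from calibration data, while W8A8_DYNAMIC recomputes the scale from each batch at runtime. A minimal pure-Python sketch of the idea (illustrative only, not LLM-Compressor's actual implementation):

```python
def int8_scale(values):
    """Scale that maps the maximum absolute value onto 127."""
    return max(abs(v) for v in values) / 127.0

def quantize(values, scale):
    return [max(-127, min(127, round(v / scale))) for v in values]

calibration_batch = [0.9, -2.0, 1.5]   # activations seen ahead of time
runtime_batch = [0.1, -0.3, 0.2]       # activations seen only at inference

# Static (W8A8): one activation scale, fixed from calibration data.
static_scale = int8_scale(calibration_batch)
static_q = quantize(runtime_batch, static_scale)

# Dynamic (W8A8_DYNAMIC): scale recomputed from each runtime batch.
dynamic_scale = int8_scale(runtime_batch)
dynamic_q = quantize(runtime_batch, dynamic_scale)

# The dynamic scale adapts to the small runtime batch, so its INT8 values
# use more of the [-127, 127] range and lose less precision.
print(static_q, dynamic_q)
```

Dynamic quantization trades a little runtime overhead (computing scales per batch) for better accuracy on activations whose range varies between inputs.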

Running Quantized Models#

For a model quantized by ModelSlim, enable quantization in vLLM Ascend by passing the --quantization ascend parameter. Models quantized by LLM-Compressor do not need this parameter; their quantization settings are read from the checkpoint's quantization config.

Offline Inference#

from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The future of AI is",
]
# Set sampling parameters
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=40)

llm = LLM(model="/path/to/your/quantized_model",
          max_model_len=4096,
          trust_remote_code=True,
          # Set appropriate TP and DP values
          tensor_parallel_size=2,
          data_parallel_size=1,
          # Set serving model name
          served_model_name="quantized_model",
          # Specify `quantization="ascend"` for models quantized by ModelSlim;
          # omit this argument for models quantized by LLM-Compressor
          quantization="ascend")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

Online Inference#

# Corresponding to the offline inference example above
# (--quantization ascend is only needed for ModelSlim-quantized models)
vllm serve /path/to/your/quantized_model \
    --max-model-len 4096 \
    --port 8000 \
    --tensor-parallel-size 2 \
    --data-parallel-size 1 \
    --served-model-name quantized_model \
    --trust-remote-code \
    --quantization ascend

References#