
Add Qwen3.5 model support via opt-in dependency extra#154

Closed
ricky-chaoju wants to merge 1 commit into vllm-project:main from ricky-chaoju:feat/qwen3.5-dependency-upgrade
Conversation

@ricky-chaoju
Contributor

Summary

Enable Qwen3.5 (dense and MoE) model support by adding a [qwen35] optional dependency extra. Base dependencies are unchanged, so existing users are unaffected.

Consolidates and supersedes #121, #123, #129.

Why the original PRs' runtime fixes are unnecessary

| Original PR | Fix | Why not needed |
| --- | --- | --- |
| #121 | Hybrid cache batched decode fallback | With mlx-lm ≥ 0.31.0, Qwen3.5 attention correctly handles `BatchKVCache.offset` as an `mx.array`, so batched decode works without the fallback |
| #123 | RoPE validation monkeypatch | The Qwen3.5 config has `partial_rotary_factor=0.25`, which triggers transformers' built-in `set()` coercion path |
| #129 | mlx-lm model alias shim | mlx-lm ≥ 0.31.0 natively includes the `qwen3_5` and `qwen3_5_moe` modules |
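The mlx-lm ≥ 0.31.0 gate underlying the first and third rows could be checked with a small stdlib-only helper. This is an illustrative sketch, not code from this PR; `supports_qwen35` and `MIN_MLX_LM` are hypothetical names:

```python
# Minimal version gate for the mlx-lm >= 0.31.0 requirement described above.
# Hypothetical helper; vllm-metal does not necessarily ship this function.

MIN_MLX_LM = (0, 31, 0)  # first release said to include the qwen3_5 modules

def parse_version(v: str) -> tuple:
    """Parse a simple 'X.Y.Z' version string into a comparable int tuple."""
    return tuple(int(part) for part in v.split("."))

def supports_qwen35(installed: str) -> bool:
    """Return True if the installed mlx-lm version meets the Qwen3.5 minimum."""
    return parse_version(installed) >= MIN_MLX_LM

print(supports_qwen35("0.31.0"))  # True: meets the minimum
print(supports_qwen35("0.30.2"))  # False: predates qwen3_5 support
```

Note this naive tuple comparison only handles plain `X.Y.Z` strings; real dependency resolution is better left to pip via the extra itself.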

Installation

# Qwen3.5 users
VLLM_VERSION=0.17.0 ./install.sh
pip install 'vllm-metal[qwen35]'
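The `[qwen35]` extra installed above might be declared along these lines in `pyproject.toml`. This is an illustrative fragment based on the versions discussed in this thread, not the PR's actual diff:

```toml
[project.optional-dependencies]
# Opt-in extra: the base install keeps its existing pins untouched.
qwen35 = [
    "mlx-lm>=0.31.0",       # first release with native qwen3_5 / qwen3_5_moe modules
    "transformers>=5.0.0",  # per the review discussion below; note pip upgrades
                            # this in the whole environment, not in isolation
]
```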

Signed-off-by: RickyChen / 陳昭儒 <ricky.chen@infinirc.com>
@ricky-chaoju force-pushed the feat/qwen3.5-dependency-upgrade branch from b0b35a7 to 8c07c3c on March 11, 2026 14:24
@ricky-chaoju marked this pull request as ready for review March 11, 2026 14:29
Collaborator

@LxYuan0420 left a comment


Thanks for consolidating those PRs. The compatibility analysis is useful and we'll reference it.

Closing this PR because:

  1. transformers>=5.0.0 in an optional extra isn't isolated — it upgrades the entire environment and breaks vLLM 0.14.x + all existing models. Transformers v5 may be fine but we need to verify carefully.
  2. vLLM upgrade is a project-wide migration, not a per-model opt-in. We're waiting on #134 (@ericcurtin's wheel install) to confirm the macOS path before bumping further.
  3. Qwen3.5 has ongoing MLX incompatibilities — cache is broken for hybrid architectures (mlx-lm#980, mlx-lm#903), server tool calls fail (mlx-lm#905). Not ready to claim support until upstream stabilizes.
  4. VLLM_VERSION env var is fragile — default install must always produce a working setup.
  5. No smoke tests for existing models or end-to-end tests for Qwen3.5.

Suggested approach: break this into steps: (1) first land the baseline upgrade with smoke tests proving existing models still work, then (2) add Qwen3.5 support once the upstream MLX issues are resolved.

Happy to review a re-scoped version. 🙏

@LxYuan0420 closed this on Mar 11, 2026
@ricky-chaoju deleted the feat/qwen3.5-dependency-upgrade branch on March 17, 2026 07:40