
Tune MiniMax MI355X vLLM scheduling thresholds#1276

Merged
chunfangamd merged 10 commits into main from minimax-mi355x-scheduler-thresholds
May 6, 2026

Conversation

@jiacao-amd
Collaborator

@jiacao-amd jiacao-amd commented May 4, 2026

Summary

Tune the MiniMax-M2.5 FP8 MI355X vLLM launch policy for better throughput and stability across the 1k/1k and 8k/1k sweep points.

  • The default initialization path remains block-size=32, shuffled KV cache disabled, and async scheduling enabled.
  • 1k/1k TP8/EP8: keep block-size=32 and shuffled KV cache disabled, and disable async scheduling.
  • 1k/1k non-TP8/EP8: use block-size=16 with shuffled KV cache; disable async scheduling through c128.
  • 8k/1k TP8/EP8: keep block-size=32, shuffled KV cache disabled, disable AITER MoE with VLLM_ROCM_USE_AITER_MOE=0, and disable async scheduling.
  • 8k/1k non-TP8/EP8: disable async scheduling through c64; use shuffled KV cache with block-size=16 at c64 and above.
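The branching above can be sketched as a small helper. This is a hypothetical illustration of the policy, not code from benchmarks/single_node/minimaxm2.5_fp8_mi355x.sh; the function name and argument order are invented, and TP/EP are collapsed into a single `tp` parameter since they move together in the sweep.

```shell
# Hypothetical sketch of the tuned launch policy (illustrative only).
# Args: ISL, OSL, TP (assumed equal to EP here), concurrency.
select_policy() {
  local isl=$1 osl=$2 tp=$3 conc=$4
  # Defaults: block-size=32, shuffled KV cache off, async scheduling on,
  # AITER MoE on (VLLM_ROCM_USE_AITER_MOE=1).
  local block=32 shuffle=0 async=1 aiter_moe=1

  if [ "$isl" -eq 1024 ]; then
    if [ "$tp" -eq 8 ]; then
      async=0                               # 1k/1k TP8/EP8: disable async scheduling
    else
      block=16; shuffle=1                   # 1k/1k non-TP8/EP8: block-size=16 + shuffled KV cache
      [ "$conc" -le 128 ] && async=0        # async scheduling off through c128
    fi
  elif [ "$isl" -eq 8192 ]; then
    if [ "$tp" -eq 8 ]; then
      async=0; aiter_moe=0                  # 8k/1k TP8/EP8: also VLLM_ROCM_USE_AITER_MOE=0
    else
      [ "$conc" -le 64 ] && async=0         # async scheduling off through c64
      [ "$conc" -ge 64 ] && { block=16; shuffle=1; }  # shuffled KV + block-size=16 at c64+
    fi
  fi
  echo "block=$block shuffle=$shuffle async=$async aiter_moe=$aiter_moe"
}
```

Note that at exactly c64 on the 8k/1k non-TP8/EP8 path both conditions fire: async scheduling is disabled and the shuffled-KV/block-size=16 combination is used.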

Throughput Comparison

Metric: tput_per_gpu only.

| ISL/OSL | TP/EP | Conc | Baseline | Validation | Delta |
|---------|-------|------|----------|------------|-------|
| 1k/1k | 2/2 | 2 | 175.6 | 196.7 | +12.1% |
| 1k/1k | 2/2 | 4 | 316.6 | 351.2 | +10.9% |
| 1k/1k | 2/2 | 8 | 551.1 | 558.8 | +1.4% |
| 1k/1k | 2/2 | 16 | 912.4 | 918.0 | +0.6% |
| 1k/1k | 2/2 | 32 | 1512.4 | 1615.8 | +6.8% |
| 1k/1k | 2/2 | 64 | 2283.1 | 2377.0 | +4.1% |
| 1k/1k | 2/2 | 128 | 3745.8 | 3844.7 | +2.6% |
| 1k/1k | 2/2 | 256 | 5459.7 | 5787.7 | +6.0% |
| 1k/1k | 2/2 | 512 | 8080.0 | 8346.7 | +3.3% |
| 1k/1k | 4/4 | 4 | 173.0 | 197.0 | +13.9% |
| 1k/1k | 4/4 | 8 | 329.8 | 357.2 | +8.3% |
| 1k/1k | 4/4 | 16 | 554.2 | 601.6 | +8.6% |
| 1k/1k | 4/4 | 32 | 976.6 | 1045.3 | +7.0% |
| 1k/1k | 4/4 | 64 | 1574.5 | 1672.4 | +6.2% |
| 1k/1k | 4/4 | 128 | 2620.1 | 2717.2 | +3.7% |
| 1k/1k | 4/4 | 256 | 3846.2 | 3971.7 | +3.3% |
| 1k/1k | 8/8 | 2 | 47.7 | 55.9 | +17.2% |
| 8k/1k | 2/2 | 2 | 712.6 | 820.5 | +15.1% |
| 8k/1k | 2/2 | 4 | 1320.6 | 1431.6 | +8.4% |
| 8k/1k | 2/2 | 8 | 2162.1 | 2183.4 | +1.0% |
| 8k/1k | 2/2 | 16 | 3378.7 | 3513.5 | +4.0% |
| 8k/1k | 2/2 | 32 | 4645.2 | 5070.8 | +9.2% |
| 8k/1k | 2/2 | 64 | 6495.4 | 6752.2 | +4.0% |
| 8k/1k | 2/2 | 128 | 8601.5 | 8852.3 | +2.9% |
| 8k/1k | 2/2 | 256 | 10391.0 | 10195.3 | -1.9% |
| 8k/1k | 4/4 | 4 | 730.2 | 807.3 | +10.6% |
| 8k/1k | 4/4 | 8 | 1291.4 | 1384.5 | +7.2% |
| 8k/1k | 4/4 | 16 | 2076.3 | 2241.0 | +7.9% |
| 8k/1k | 4/4 | 32 | 3314.3 | 3523.6 | +6.3% |
| 8k/1k | 4/4 | 64 | 4741.3 | 5088.2 | +7.3% |
| 8k/1k | 4/4 | 128 | 6719.6 | 6885.5 | +2.5% |
| 8k/1k | 4/4 | 256 | 8156.8 | 8391.9 | +2.9% |
| 8k/1k | 4/4 | 512 | 9136.0 | 9861.0 | +7.9% |
| 8k/1k | 8/8 | 2 | failed | 199.9 | newly passing |
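The Delta column is the relative change of the validation run over the baseline, rounded to one decimal place. A minimal helper to reproduce it (hypothetical, not part of the PR's tooling):

```shell
# Percentage delta between a baseline and a validation tput_per_gpu value,
# formatted like the table above (hypothetical helper, illustrative only).
delta_pct() {
  awk -v b="$1" -v v="$2" 'BEGIN { printf "%+.1f%%\n", (v - b) / b * 100 }'
}
```

For example, `delta_pct 316.6 351.2` prints `+10.9%`, matching the 1k/1k 2/2 c4 row.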

Testing

  • bash -n benchmarks/single_node/minimaxm2.5_fp8_mi355x.sh
  • git diff --check
  • Compared results_bmk artifacts from the validation and baseline runs above.

@jiacao-amd jiacao-amd requested a review from a team May 4, 2026 20:26
@github-actions
Contributor

github-actions Bot commented May 4, 2026

Thanks for the contribution! For vLLM & SGLang, please ensure that your recipe is similar to the official vLLM recipes and/or the SGLang cookbook.

If it is not, please create a PR there first before we can merge your PR into the master branch. Let's ensure that the documentation is first class so that the entire ML community can benefit from your hard work! Thank you.

PR authors are responsible for ensuring that after merging, all GitHub Action jobs fully pass. A lot of the time, failures are just flakes and simply re-running the failed jobs will fix it. If re-running failed jobs is attempted, PR authors are responsible for ensuring it passes. See GitHub's docs on re-running failed jobs: https://docs.github.com/en/actions/how-tos/manage-workflow-runs/re-run-workflows-and-jobs#re-running-failed-jobs-in-a-workflow

As a rule of thumb, generally, PR authors should request a review & get a PR approval from the respective companies' CODEOWNERS before requesting a review from core maintainers.

If additional help is needed, PR authors can reach out to core maintainers over Slack.

@jiacao-amd jiacao-amd force-pushed the minimax-mi355x-scheduler-thresholds branch from 17bc2cc to c2b7d37 Compare May 4, 2026 20:30
Comment thread benchmarks/single_node/minimaxm2.5_fp8_mi355x.sh Outdated
@jiacao-amd jiacao-amd force-pushed the minimax-mi355x-scheduler-thresholds branch from c2b7d37 to 98bc84c Compare May 4, 2026 20:35
@SemiAnalysisAI SemiAnalysisAI deleted a comment from github-actions Bot May 4, 2026
@SemiAnalysisAI SemiAnalysisAI deleted a comment from github-actions Bot May 4, 2026
@jiacao-amd jiacao-amd force-pushed the minimax-mi355x-scheduler-thresholds branch from 98bc84c to a9a3cef Compare May 4, 2026 21:42
@SemiAnalysisAI SemiAnalysisAI deleted a comment from github-actions Bot May 4, 2026
@jiacao-amd
Collaborator Author

/sweep test-config --config-files .github/configs/amd-master.yaml --config-keys minimaxm2.5-fp8-mi355x-vllm

@github-actions
Contributor

github-actions Bot commented May 4, 2026

@jiacao-amd Kicking off a sweep.

Run: https://github.com/SemiAnalysisAI/InferenceX/actions/runs/25346292897
Command: test-config --config-files .github/configs/amd-master.yaml --config-keys minimaxm2.5-fp8-mi355x-vllm
Pinned ref: a9a3cef
Approval: not required (trusted collaborator).

Collaborator

@chunfangamd chunfangamd left a comment


This is a data-driven tuning change, and the concept is quite promising.

Please fix the CI failure and rewrite the logic slightly to improve understanding. Thanks @jiacao-amd for the work!

Comment thread benchmarks/single_node/minimaxm2.5_fp8_mi355x.sh Outdated
Collaborator

@chunfangamd chunfangamd left a comment


LGTM. Thanks @jiacao-amd!

@chunfangamd
Collaborator

/sweep test-config --config-files .github/configs/amd-master.yaml --config-keys minimaxm2.5-fp8-mi355x-vllm

@github-actions
Contributor

github-actions Bot commented May 5, 2026

@chunfangamd Kicking off a sweep.

Run: https://github.com/SemiAnalysisAI/InferenceX/actions/runs/25395615583
Command: test-config --config-files .github/configs/amd-master.yaml --config-keys minimaxm2.5-fp8-mi355x-vllm
Pinned ref: 807bf64
Approval: not required (trusted collaborator).

@jiacao-amd
Collaborator Author

/sweep test-config --config-files .github/configs/amd-master.yaml --config-keys minimaxm2.5-fp8-mi355x-vllm

@github-actions
Contributor

github-actions Bot commented May 6, 2026

@jiacao-amd Kicking off a sweep.

Run: https://github.com/SemiAnalysisAI/InferenceX/actions/runs/25449459268
Command: test-config --config-files .github/configs/amd-master.yaml --config-keys minimaxm2.5-fp8-mi355x-vllm
Pinned ref: 8bbdc81
Approval: not required (trusted collaborator).

@chunfangamd chunfangamd merged commit 44a0484 into main May 6, 2026
3 checks passed
@chunfangamd chunfangamd deleted the minimax-mi355x-scheduler-thresholds branch May 6, 2026 18:23
chunfangamd added a commit that referenced this pull request May 6, 2026 (#1293):

PR #1276 ("Tune MiniMax MI355X vLLM scheduling thresholds") landed without
a perf-changelog entry — the prepared entry was dropped in commit 8d8b1e0
("Remove MiniMax perf changelog entry") before merge, so the tuned recipe
never re-ran on push-to-main and the dashboard still reflects the old
launch policy.

Re-add the entry so a sweep is triggered for the new policy and the change
is documented chronologically. The entry references the original PR #1276,
matching the convention used for prior changelog re-appends (e.g. #1269).

Co-authored-by: Cursor <cursoragent@cursor.com>