tests: add MoE subgraph tests with require_full_compilation enforcement #4179
yizhuoz004 wants to merge 1 commit into pytorch:main from
Conversation
Hi @yizhuoz004! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA Signed. If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
yizhuoz004 force-pushed the branch from ede1e15 to 6a18a12. The commit message matches the PR description below and carries the trailer: Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Description
Add tests/py/dynamo/hlo/test_moe.py covering all 7 MoE routing and dispatch variants from popular open-source LLMs (Mixtral, Qwen2, Qwen3, Llama4, DeepSeek-V2, DeepSeek-V3/R1, Nemotron-H). 50 parameterized test cases verify TRT numerical correctness against the PyTorch reference.
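For context, a minimal sketch of the kind of top-k routing + dispatch subgraph these variants exercise (Mixtral-style top-k softmax gating with a dense dispatch). The module, dimensions, and dispatch strategy below are illustrative assumptions, not code taken from test_moe.py:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoE(nn.Module):
    """Illustrative Mixtral-style MoE subgraph: top-k gate + expert MLPs."""

    def __init__(self, hidden_dim=64, ffn_dim=128, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(hidden_dim, num_experts, bias=False)
        self.w1 = nn.Parameter(torch.randn(num_experts, hidden_dim, ffn_dim) * 0.02)
        self.w2 = nn.Parameter(torch.randn(num_experts, ffn_dim, hidden_dim) * 0.02)

    def forward(self, x):  # x: [tokens, hidden_dim]
        weights = F.softmax(self.gate(x), dim=-1)
        topk_w, topk_idx = torch.topk(weights, self.top_k, dim=-1)
        topk_w = topk_w / topk_w.sum(dim=-1, keepdim=True)  # renormalize (Mixtral-style)
        # Dense dispatch: run every expert on every token, then mask-combine
        # using the top-k routing weights.
        h = torch.einsum("th,ehf->tef", x, self.w1).relu()
        y = torch.einsum("tef,efh->teh", h, self.w2)
        mask = torch.zeros_like(weights).scatter(-1, topk_idx, topk_w)
        return torch.einsum("teh,te->th", y, mask)
```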
Extend DispatchTestCase.run_test() with require_full_compilation=True support: when enabled, run_test() calls TRTInterpreter.validate_conversion() before building the engine and fails immediately if any op lacks a TRT converter. All MoE tests pass this check, confirming zero PyTorch fallback for every routing/dispatch/expert-MLP combination tested.
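A minimal sketch of how such a pre-build check can be wired up, assuming TRTInterpreter is importable from torch_tensorrt.dynamo.conversion and that validate_conversion() returns the set of ops without registered converters; the helper name and wiring below are assumptions, not the actual harness code inside run_test():

```python
from torch_tensorrt.dynamo.conversion import TRTInterpreter


def assert_fully_convertible(test_case, interpreter: TRTInterpreter) -> None:
    """Fail the test before engine build if any op would fall back to PyTorch."""
    # validate_conversion() is assumed to report graph ops that have no
    # registered TRT converter; an empty result means zero PyTorch fallback.
    unsupported = interpreter.validate_conversion()
    if unsupported:
        test_case.fail(
            f"require_full_compilation=True, but ops without TRT converters were found: {unsupported}"
        )
    # If the check passes, the caller proceeds to build the engine and compare
    # TRT outputs against the PyTorch reference as usual.
```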
Fixes # (issue)
Type of change
Tests
Checklist: