
fix(testing/bench.py): add float16 and float8_e5m2 to dtype_to_str mapping#5

Open
pandacooming wants to merge 2 commits into deepseek-ai:main from pandacooming:fix/dtype-to-str-missing-fp16-e5m2

Conversation

@pandacooming

Summary

Add the two missing dtype mappings to dtype_to_str() in tile_kernels/testing/bench.py:

  • torch.float16 → 'fp16'
  • torch.float8_e5m2 → 'e5m2'

Fixes #4

Changes

```diff
 def dtype_to_str(dtype: torch.dtype) -> str:
     mapping = {
         torch.float32: 'fp32',
+        torch.float16: 'fp16',
         torch.bfloat16: 'bf16',
         torch.float8_e4m3fn: 'e4m3',
+        torch.float8_e5m2: 'e5m2',
         torch.int8: 'e2m1',
     }
     if dtype not in mapping:
-        raise ValueError(f'Unsupported dtype: {dtype}. Only fp32, bf16, e4m3, and int8(e2m1) are supported')
+        raise ValueError(f'Unsupported dtype: {dtype}. Only fp32, fp16, bf16, e4m3, e5m2, and int8(e2m1) are supported')
     return mapping[dtype]
```

Testing

All 6 supported dtypes pass, and an invalid dtype still raises ValueError:

torch.float16: ✅
torch.float32: ✅
torch.bfloat16: ✅
torch.float8_e4m3fn: ✅
torch.float8_e5m2: ✅
torch.int8: ✅
torch.int32: ✅ raises ValueError
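
The checks above can be sketched as a small standalone script. To keep the sketch runnable without a torch install, it uses dtype-name strings as stand-ins for the torch.dtype keys — a simplification of, not a copy of, the actual bench.py code:

```python
# Sketch of the dtype_to_str contract this PR establishes.
# NOTE: plain strings stand in for torch.dtype objects (e.g. 'float16'
# for torch.float16) so the sketch has no torch dependency.

def dtype_to_str(dtype: str) -> str:
    mapping = {
        'float32': 'fp32',
        'float16': 'fp16',        # added by this PR
        'bfloat16': 'bf16',
        'float8_e4m3fn': 'e4m3',
        'float8_e5m2': 'e5m2',    # added by this PR
        'int8': 'e2m1',
    }
    if dtype not in mapping:
        raise ValueError(
            f'Unsupported dtype: {dtype}. Only fp32, fp16, bf16, '
            f'e4m3, e5m2, and int8(e2m1) are supported'
        )
    return mapping[dtype]

# All six supported dtypes map to their short names...
for name, short in [('float32', 'fp32'), ('float16', 'fp16'),
                    ('bfloat16', 'bf16'), ('float8_e4m3fn', 'e4m3'),
                    ('float8_e5m2', 'e5m2'), ('int8', 'e2m1')]:
    assert dtype_to_str(name) == short

# ...and an unsupported dtype still raises ValueError.
try:
    dtype_to_str('int32')
except ValueError as e:
    print('ok:', e)
```
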

pandacooming added 2 commits April 24, 2026 00:53
- Add .github/workflows/format.yml: GitHub Actions workflow that runs
  yapf + ruff on every push to main and on every PR.
- Add format.sh: local formatting script (used by CI and for
  contributors to run locally before pushing).

Both files follow the same conventions as deepseek-ai/DeepEP.

Development

Successfully merging this pull request may close these issues.

dtype_to_str: unsupported dtype raises ValueError for fp16/float8_e5m2
