[2091][performance] Track throughput metrics#2124
Open

florianscheidl wants to merge 67 commits into ecmwf:develop from …mance-metric-profiling

Conversation
ekouts
suggested changes
Apr 14, 2026
```python
@pytest.fixture()
def tracker():
    """A tracker with warmup_steps=2 on CPU."""
    return ThroughputTracker(device=torch.device("cpu"), world_size=1, warmup_steps=2)
```
Contributor
Suggested change:

```diff
- return ThroughputTracker(device=torch.device("cpu"), world_size=1, warmup_steps=2)
+ return ThroughputTracker(device=torch.device("cpu"), warmup_steps=2)
```
The signature here is wrong, right? `world_size` doesn't exist in the `__init__` of the `ThroughputTracker` class.
```python
def test_warmup_steps_not_counted():
    """Steps during warmup do not contribute to totals."""
    tracker = ThroughputTracker(device=torch.device("cpu"), world_size=1, warmup_steps=3)
```
Contributor
Same comment as for the fixture above.
ekouts
suggested changes
Apr 14, 2026
```python
    fresh each step via ``compute_source_bytes`` as batch sizes
    can vary across samples.
    """
    torch.cuda.synchronize()
```
Contributor
Do we need to synchronize on every step, or should we skip it during warmup?
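One way to act on the reviewer's question is to gate the device sync on the warmup counter. A minimal sketch, assuming a hypothetical helper (`maybe_synchronize` is not part of the PR; `sync_fn` stands in for `torch.cuda.synchronize`):

```python
def maybe_synchronize(step: int, warmup_steps: int, sync_fn) -> bool:
    """Invoke sync_fn (e.g. torch.cuda.synchronize) only once warmup is over.

    Returns True when the sync actually ran. Hypothetical helper for
    illustration; the tracker in the PR calls the sync directly.
    """
    if step < warmup_steps:
        return False  # skip the expensive device sync during warmup
    sync_fn()
    return True
```

The trade-off is that skipping the sync makes warmup steps cheaper, at the cost of slightly different timing behaviour between warmup and steady-state steps.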
Description
Implements optional on-the-fly throughput metrics, logged per training step. The new metrics are named "performance.throughput.*" and track per-device and global throughput in terms of:
In multi-device and multi-node setups, we make an all-reduce call to get the global throughput metrics.
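As a rough illustration of the per-step bookkeeping described above, here is a plain-Python sketch with hypothetical names (the actual class is torch-based, takes a `device`, and uses an all-reduce to sum local rates into the global metrics):

```python
class SimpleThroughputTracker:
    """Minimal sketch of per-step throughput tracking with a warmup period.

    Hypothetical stand-in for the PR's ThroughputTracker: steps inside the
    warmup window are counted but do not contribute to the totals.
    """

    def __init__(self, warmup_steps: int = 2):
        self.warmup_steps = warmup_steps
        self.step_count = 0
        self.total_samples = 0
        self.total_seconds = 0.0

    def step(self, num_samples: int, seconds: float) -> None:
        self.step_count += 1
        if self.step_count <= self.warmup_steps:
            return  # warmup steps do not contribute to totals
        self.total_samples += num_samples
        self.total_seconds += seconds

    def samples_per_second(self) -> float:
        if self.total_seconds == 0.0:
            return 0.0
        return self.total_samples / self.total_seconds
```

In a distributed run, each rank would compute its local rate this way and the global metric would be obtained by summing the local rates across ranks (the all-reduce step mentioned above).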
Usage
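A sketch of the config fragment this section describes (only `track_performance_metrics` and `train_logging` are named in the PR; the exact nesting is an assumption):

```yaml
# Hypothetical nesting; see the performance_*.yaml configs in the PR.
train_logging:
  track_performance_metrics: True
```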
To activate throughput tracking, add `track_performance_metrics: True` in the training config, under `train_logging`; see the `performance_*.yaml` configs added here. Run with a base configuration, e.g.:

Issue Number
Closes #2091.
Preview:
We investigated the effect of batch sizes on throughput, see https://gitlab.jsc.fz-juelich.de/hedgedoc/SUW6Zq-BR3uYCwU3hmIb6w?both#.
Below are screenshots from MLFlow:
Checklist before asking for review
- `./scripts/actions.sh lint`
- `./scripts/actions.sh unit-test`
- `./scripts/actions.sh integration-test`
- `launch-slurm.py --time 60`