Add --output-warmup-metrics flag to cpu userbenchmark scripts #2604
Conversation
Adds a new `--output-warmup-metrics` flag that includes warmup metrics in the benchmark result JSON files. This allows us to analyse the warmup iterations and decide how many are enough.
An example with and without the new flag:

Without the flag: example command and output tree (benchmark metrics only).

With the flag: example command and output tree (benchmark metrics plus warmup metrics).
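As a hedged illustration of what the flag enables, here is a short Python sketch that reads a result file and compares each warmup latency against the steady-state median. The file path and the key names (`warmup_metrics`, `metrics`, `latencies`) are assumptions for illustration, not the PR's actual schema:

```python
import json

# Hypothetical result-file path and key names; the real layout is defined
# by the cpu userbenchmark scripts, not reproduced here.
with open(".userbenchmark/cpu/metrics-20250101.json") as f:
    result = json.load(f)

warmup = result["warmup_metrics"]["latencies"]  # per-warmup-iteration latency (ms)
steady = result["metrics"]["latencies"]         # post-warmup latencies (ms)

# Compare each warmup iteration against the steady-state median to judge
# how many warmup rounds are needed before latency settles.
median = sorted(steady)[len(steady) // 2]
for i, lat in enumerate(warmup):
    drift = (lat - median) / median * 100
    print(f"warmup iter {i}: {lat:.3f} ms ({drift:+.1f}% vs steady median)")
```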
cc: @FindHao. Thanks in advance!
```diff
@@ -42,13 +44,10 @@ def maybe_synchronize(device: str):

 def get_latencies(
     func, device: str, nwarmup=WARMUP_ROUNDS, num_iter=BENCHMARK_ITERS
-) -> List[float]:
+) -> Tuple[List[float], List[float]]:
```
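To make the signature change concrete, here is a minimal sketch of a `get_latencies` that returns warmup and measured latencies separately. The body is illustrative only (the PR's actual implementation is not shown), and the `maybe_synchronize` stub stands in for the helper visible in the hunk header:

```python
import time
from typing import List, Tuple

# Assumed defaults; the real constants live elsewhere in the file.
WARMUP_ROUNDS = 10
BENCHMARK_ITERS = 50


def maybe_synchronize(device: str) -> None:
    # Stub for the helper shown in the hunk header above; a no-op on CPU.
    pass


def get_latencies(
    func, device: str, nwarmup=WARMUP_ROUNDS, num_iter=BENCHMARK_ITERS
) -> Tuple[List[float], List[float]]:
    """Return (warmup_latencies, measured_latencies), both in milliseconds."""
    warmup: List[float] = []
    measured: List[float] = []
    for i in range(nwarmup + num_iter):
        start = time.perf_counter()
        func()
        maybe_synchronize(device)
        elapsed_ms = (time.perf_counter() - start) * 1e3
        # The first nwarmup iterations are recorded separately rather
        # than discarded, which is what enables the new flag's output.
        (warmup if i < nwarmup else measured).append(elapsed_ms)
    return warmup, measured
```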
I'm concerned that this PR introduces too significant a change to the core APIs, and not only on this line. As an alternative, could you consider adding an option to skip the warmup phase and use the actual run results as the 'warmup' results?
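A hedged sketch of the suggested alternative, assuming the unchanged `get_latencies` that returns a single `List[float]` and a hypothetical `skip_warmup` option; nothing below is from the PR:

```python
from typing import List


def get_warmup_and_run(func, device: str, skip_warmup: bool = False) -> List[float]:
    """Hypothetical illustration of the reviewer's suggestion: instead of
    changing get_latencies to return two lists, skip the dedicated warmup
    phase and treat the actual run's latencies as the warmup data too."""
    if skip_warmup:
        # nwarmup=0: no throwaway iterations. Every measured latency is kept,
        # so the same list serves both as the result and as the warmup record,
        # and callers can still inspect how latency settles across iterations.
        return get_latencies(func, device, nwarmup=0)
    return get_latencies(func, device)
```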
I've decided to drop this change in favour of the suggested alternative. Thanks @FindHao.