Fix Rust benchmarking #223
Conversation
Pull request google#171 introduced a problem with the benchmarking code, rendering it useless. This commit traverses the dataset directory recursively and thus restores meaningful benchmarking. It also makes sure that we traverse the expected number of shards. This commit additionally includes the test and holdout splits, since when we have that data we might use it. This will introduce a regression compared to the first benchmarks, and definitely one compared to the empty benchmarks.
Summary of Changes
Hello @wsxrdv, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses a critical issue in the Rust benchmarking setup, which was previously rendered ineffective due to an incorrect dataset traversal mechanism. It re-enables meaningful benchmarking by implementing recursive directory traversal for dataset files and ensures that all expected data shards, including test and holdout splits, are correctly processed. This change aims to provide a more comprehensive and accurate representation of performance, though it may lead to a regression in benchmark results compared to previous, less complete runs.
Highlights
- Benchmarking Fix: Rectified a broken Rust benchmarking setup by enabling recursive traversal of dataset directories, ensuring all relevant data files are included.
- Comprehensive Data Inclusion: Expanded benchmarking scope to include test and holdout data splits, providing a more complete performance evaluation.
- Shard Count Verification: Introduced an assertion to confirm that the correct and expected number of data shards are processed during benchmarking.
- Dependency Management: Added the `glob` crate to facilitate recursive file system pattern matching for dataset discovery.
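The highlights above describe recursive dataset traversal plus a shard-count assertion. Below is a minimal, self-contained sketch of that pattern. It is not the PR's actual code: the directory layout, file extension, and expected count are hypothetical, and it uses a hand-rolled `std::fs` walk in place of the `glob` crate's `**` patterns so the example has no external dependencies.

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Recursively collect all files under `dir` whose name ends with `suffix`.
/// (Illustrative stand-in for a recursive `glob` pattern like `**/*.arrayrecord`.)
fn collect_shards(dir: &Path, suffix: &str, out: &mut Vec<PathBuf>) -> io::Result<()> {
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        if path.is_dir() {
            collect_shards(&path, suffix, out)?;
        } else if path.to_string_lossy().ends_with(suffix) {
            out.push(path);
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    // Hypothetical layout: train/test/holdout splits, each holding shard files.
    let root = std::env::temp_dir().join("bench_dataset_demo");
    let _ = fs::remove_dir_all(&root); // clean slate for the demo
    for split in ["train", "test", "holdout"] {
        let d = root.join(split);
        fs::create_dir_all(&d)?;
        for i in 0..4 {
            fs::write(d.join(format!("shard-{i:05}.arrayrecord")), b"")?;
        }
    }

    let mut shards = Vec::new();
    collect_shards(&root, ".arrayrecord", &mut shards)?;

    // Guard against silently benchmarking an empty or partial dataset,
    // in the spirit of the PR's shard-count check. 12 is a made-up number.
    const EXPECTED_SHARDS: usize = 12;
    assert_eq!(shards.len(), EXPECTED_SHARDS, "unexpected shard count");
    println!("found {} shards", shards.len());

    fs::remove_dir_all(&root)?;
    Ok(())
}
```

A non-recursive `fs::read_dir` on the dataset root would see only the split directories, not the shard files inside them, which is how a benchmark can quietly end up iterating over nothing.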
Code Review
This pull request addresses a significant issue in the Rust benchmarking setup. By replacing the non-recursive directory traversal with a recursive glob pattern, it ensures all relevant data shards (including test and holdout splits) are included in the benchmark, making the results meaningful again. The addition of the glob dependency is appropriate for this change. The code is well-written, and I have one minor suggestion to improve the new assertion for better debuggability and maintainability.
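The review above mentions a suggestion to make the new assertion easier to debug; the actual review comment is not included here, but a common improvement of this kind is shown below. The names and the expected count are hypothetical. The point is that `assert_eq!` with a context message reports both values on failure, whereas a bare `assert!` on a boolean reports only "assertion failed".

```rust
// Stand-in for the real recursive traversal; returns a shard count.
fn count_shards() -> usize {
    12
}

fn main() {
    let expected_shards = 12; // hypothetical expected count
    let found_shards = count_shards();

    // On failure this prints both `found_shards` and `expected_shards`
    // alongside the message, which is far easier to act on in CI logs
    // than a bare `assert!(found_shards == expected_shards)`.
    assert_eq!(
        found_shards, expected_shards,
        "dataset traversal found an unexpected number of shards"
    );
    println!("shard count ok: {found_shards}");
}
```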
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Pull Request Test Coverage Report for Build 17214041473
💛 - Coveralls

| | |
|---|---|
| Branch | fix_bench |
| Testbed | ubuntu-latest |
⚠️ WARNING: No Threshold found! Without a Threshold, no Alerts will ever be generated.
Click here to create a new Threshold.
For more information, see the Threshold documentation.
To only post results if a Threshold exists, set the `--ci-only-thresholds` flag.
Click to view all benchmark results
| Benchmark | Plot | Latency (ms) |
|---|---|---|
| ExampleIterator | 📈 view plot | 234.23 |
| parallel_map | 📈 view plot | 243.13 |