feat!: job arrays #174
Conversation
Walkthrough: The changes introduce a new method, `run_jobs`, which groups incoming jobs by rule name so that groups of same-rule jobs can later be submitted as SLURM job arrays; single-job groups are dispatched via the existing `run_job`.
Sequence Diagram(s):

```mermaid
sequenceDiagram
    participant Executor
    participant JobExecutorInterface
    Executor->>JobExecutorInterface: run_jobs(jobs)
    alt Single Job
        Executor->>JobExecutorInterface: run_job(job)
    else Multiple Jobs
        Executor->>Executor: // TODO: Implement job array submission
    end
```
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (1)
snakemake_executor_plugin_slurm/__init__.py (1)
111-124: Enhance error handling and logging
The new method should follow the same error handling and logging patterns as the rest of the codebase. Consider adding:
- Error handling for empty job lists
- Logging for job grouping operations
- Consistent error propagation
Here's a suggested implementation:

```python
def run_jobs(self, jobs: List[JobExecutorInterface]):
    if not jobs:
        return

    self.logger.debug(f"Grouping {len(jobs)} jobs by rule name")
    try:
        for rule_name, group in groupby(jobs, key=lambda job: job.rule.name):
            same_rule_jobs = list(group)
            self.logger.debug(
                f"Processing {len(same_rule_jobs)} jobs for rule '{rule_name}'"
            )
            if len(same_rule_jobs) == 1:
                self.run_job(same_rule_jobs[0])
            else:
                # Temporary implementation: submit jobs individually
                for job in same_rule_jobs:
                    self.run_job(job)
    except Exception as e:
        raise WorkflowError(f"Failed to process job group: {str(e)}")
```

🧰 Tools
🪛 Ruff (0.8.0)
114-114: Using the generator returned from `itertools.groupby()` more than once will do nothing on the second usage (B031)
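The B031 warning can be reproduced in isolation. This standalone snippet (plain strings stand in for jobs; it is not the plugin's real `Job` class) shows why the review's `list(group)` materialization matters: each group yielded by `groupby` is a one-shot generator.

```python
from itertools import groupby

# Stand-in job list: two jobs of rule "align" followed by one of rule "sort"
# (plain strings, not the plugin's real Job objects).
jobs = ["align", "align", "sort"]

for rule_name, group in groupby(jobs):
    first_pass = list(group)   # materializes the group generator
    second_pass = list(group)  # B031: the generator is already exhausted here
    print(rule_name, first_pass, second_pass)
```

The second `list(group)` is always empty, which is exactly what Ruff's B031 flags; materializing once into `same_rule_jobs` and reusing that list is the correct pattern.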
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
- snakemake_executor_plugin_slurm/__init__.py (2 hunks)
🧰 Additional context used
🪛 Ruff (0.8.0)
snakemake_executor_plugin_slurm/__init__.py
114-114: Using the generator returned from `itertools.groupby()` more than once will do nothing on the second usage (B031)
🔇 Additional comments (1)
snakemake_executor_plugin_slurm/__init__.py (1)
8-8: LGTM!
The `groupby` import is correctly placed with other standard library imports and is required for the new job array functionality.
```python
else:
    # TODO submit as array
    # share code with run_job

    # TODO in the future: give a hint to the scheduler to select preferably
    # many jobs from the same rule if possible, in order to have
    # more efficient array jobs. This should be somehow tunable, because
    # it might contradict other efficiency goals.
    ...
```
🛠️ Refactor suggestion
Implement placeholder for array jobs
The current implementation uses an ellipsis (`...`) as a placeholder, which is a silent no-op: multi-job groups would simply never be submitted. Until array job submission is implemented, these jobs should be handled individually.
Apply this improvement to handle multi-job groups temporarily:

```diff
 else:
     # TODO submit as array
     # share code with run_job
-    ...
+    # Temporary implementation: submit jobs individually until array support is added
+    for job in same_rule_jobs:
+        self.run_job(job)
```
📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
else:
    # TODO submit as array
    # share code with run_job
    # TODO in the future: give a hint to the scheduler to select preferably
    # many jobs from the same rule if possible, in order to have
    # more efficient array jobs. This should be somehow tunable, because
    # it might contradict other efficiency goals.
    # Temporary implementation: submit jobs individually until array support is added
    for job in same_rule_jobs:
        self.run_job(job)
```
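For context on the "submit as array" TODO itself: SLURM arrays are submitted with `sbatch --array`, where `%A` expands to the array's master job ID and `%a` to the task index. A minimal, purely illustrative sketch (the helper name `build_array_sbatch_call` is invented here and is not part of the plugin) of how a group of same-rule jobs could collapse into one submission string:

```python
def build_array_sbatch_call(rule_name, n_jobs, logdir="logs"):
    # Sketch only: one sbatch call covering n_jobs tasks of the same rule.
    # %A expands to the array master job ID and %a to the task index, so each
    # array task (which reads its index from SLURM_ARRAY_TASK_ID) gets its
    # own log file.
    return (
        f"sbatch --job-name {rule_name} "
        f"--array 0-{n_jobs - 1} "
        f"--output {logdir}/{rule_name}_%A_%a.log"
    )

# Three ready jobs of rule 'align' collapse into a single submission:
print(build_array_sbatch_call("align", 3))
```

Each array task would then look up its own job's command via `SLURM_ARRAY_TASK_ID`; how per-task commands are encoded is exactly the part the TODO leaves open.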
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Plan: support aggregation of jobs in the Snakemake scheduler before they are sent to the executor. This can be done by supporting a new standard resource, `min_jobs`. This way we avoid the case that jobs drop into the executor in a too fine-grained way.
Actionable comments posted: 0
♻️ Duplicate comments (1)
snakemake_executor_plugin_slurm/__init__.py (1)
177-185: ⚠️ Potential issue: Implement temporary handling for multi-job groups.
The current implementation uses an ellipsis (`...`) as a placeholder, which is a silent no-op: multi-job groups would simply never be submitted. Until array job submission is implemented, these jobs should be handled individually.
Apply this improvement to handle multi-job groups temporarily:

```diff
 else:
     # TODO submit as array
     # share code with run_job
-
-    # TODO in the future: give a hint to the scheduler to select preferably
-    # many jobs from the same rule if possible, in order to have
-    # more efficient array jobs. This should be somehow tunable, because
-    # it might contradict other efficiency goals.
-    ...
+    # Temporary implementation: submit jobs individually until array support is added
+    for job in same_rule_jobs:
+        self.run_job(job)
```
🧹 Nitpick comments (2)
snakemake_executor_plugin_slurm/__init__.py (2)
172-172: Add docstring to document the method's purpose and parameters.
The method lacks documentation explaining its purpose and parameters.
Apply this improvement:

```diff
 def run_jobs(self, jobs: List[JobExecutorInterface]):
+    """Process and submit multiple jobs, potentially as job arrays.
+
+    Args:
+        jobs: List of jobs to be submitted for execution.
+    """
```
173-173: Fix unused loop variable.
The loop control variable `rule_name` is not used within the loop body.
Apply this improvement:

```diff
-    for rule_name, group in groupby(jobs, key=lambda job: job.rule.name):
+    for _rule_name, group in groupby(jobs, key=lambda job: job.rule.name):
```

🧰 Tools
🪛 Ruff (0.8.2)
173-173: Loop control variable `rule_name` not used within loop body. Rename unused `rule_name` to `_rule_name` (B007)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- snakemake_executor_plugin_slurm/__init__.py (2 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
snakemake_executor_plugin_slurm/__init__.py
173-173: Loop control variable `rule_name` not used within loop body. Rename unused `rule_name` to `_rule_name` (B007)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: testing
🔇 Additional comments (1)
snakemake_executor_plugin_slurm/__init__.py (1)
9-9: LGTM!
The `groupby` import is correctly placed with other standard library imports and is necessary for the new functionality.
Actionable comments posted: 1
♻️ Duplicate comments (1)
snakemake_executor_plugin_slurm/__init__.py (1)
179-185: ⚠️ Potential issue: Implement placeholder for array jobs
The current implementation uses an ellipsis (`...`) as a placeholder, which is a silent no-op: multi-job groups would simply never be submitted. Until array job submission is implemented, these jobs should be handled individually.

```diff
 else:
     # TODO submit as array
     # share code with run_job
     # TODO in the future: give a hint to the scheduler to select preferably
     # many jobs from the same rule if possible, in order to have
     # more efficient array jobs. This should be somehow tunable, because
     # it might contradict other efficiency goals.
-    ...
+    # Temporary implementation: submit jobs individually until array support is added
+    for job in same_rule_jobs:
+        self.run_job(job)
```
🧹 Nitpick comments (1)
snakemake_executor_plugin_slurm/__init__.py (1)
173-173: Rename unused variable `rule_name` to `_rule_name`
The variable `rule_name` is defined in the loop statement but never used within the loop body. To follow Python conventions, prefix unused variables with an underscore.

```diff
 def run_jobs(self, jobs: List[JobExecutorInterface]):
-    for rule_name, group in groupby(jobs, key=lambda job: job.rule.name):
+    for _rule_name, group in groupby(jobs, key=lambda job: job.rule.name):
```

🧰 Tools
🪛 Ruff (0.8.2)
173-173: Loop control variable `rule_name` not used within loop body. Rename unused `rule_name` to `_rule_name` (B007)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- snakemake_executor_plugin_slurm/__init__.py (2 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
snakemake_executor_plugin_slurm/__init__.py
173-173: Loop control variable `rule_name` not used within loop body. Rename unused `rule_name` to `_rule_name` (B007)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: testing
```python
def run_jobs(self, jobs: List[JobExecutorInterface]):
    for rule_name, group in groupby(jobs, key=lambda job: job.rule.name):
        same_rule_jobs = list(group)  # Materialize the generator
        if len(same_rule_jobs) == 1:
            self.run_job(same_rule_jobs[0])
        else:
            # TODO submit as array
            # share code with run_job

            # TODO in the future: give a hint to the scheduler to select preferably
            # many jobs from the same rule if possible, in order to have
            # more efficient array jobs. This should be somehow tunable, because
            # it might contradict other efficiency goals.
            ...

def run_job(self, job: JobExecutorInterface):
```
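One caveat the review comments do not raise: `itertools.groupby` only groups *adjacent* equal keys, so unless the incoming job list is already sorted by rule name, same-rule jobs that arrive interleaved will land in separate groups and miss array batching. A standalone illustration (plain strings stand in for jobs):

```python
from itertools import groupby

rule_names = ["align", "sort", "align"]  # same-rule jobs arriving interleaved

# Without sorting, groupby splits "align" into two single-job groups:
unsorted_groups = [(name, len(list(g))) for name, g in groupby(rule_names)]
# Sorting first makes same-rule jobs adjacent, so they batch into one group:
sorted_groups = [(name, len(list(g))) for name, g in groupby(sorted(rule_names))]

print(unsorted_groups)  # [('align', 1), ('sort', 1), ('align', 1)]
print(sorted_groups)    # [('align', 2), ('sort', 1)]
```

Sorting `jobs` by `job.rule.name` before grouping (or grouping into a dict) would make the batching independent of arrival order.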
💡 Verification agent
❓ Verification inconclusive
Verify implementation aligns with PR objectives
The PR mentions implementing a min_jobs
resource for job aggregation, but this resource isn't referenced in the current implementation. Consider how this resource will be integrated with the job grouping logic.
🏁 Script executed:

```shell
#!/bin/bash
# Search for any references to min_jobs in the codebase
rg "min_jobs" --type python
```

Length of output: 58
I'll run a revised shell script using the proper file type flag to search for references to min_jobs
in Python files. This will help verify whether the intended resource is referenced anywhere in the codebase.
🏁 Script executed:

```shell
#!/bin/bash
# Search for any references to 'min_jobs' in Python files using the correct file type flag.
rg "min_jobs" --type py
```

Length of output: 23
Action Required: Integrate the min_jobs
resource with job aggregation logic
Our verification confirms that there are no references to the min_jobs
resource anywhere in the Python codebase. Since the PR explicitly mentions implementing a min_jobs
resource for job aggregation, the current job grouping logic in snakemake_executor_plugin_slurm/__init__.py
(lines 172-186) does not reflect this change.
- Location: `snakemake_executor_plugin_slurm/__init__.py` (within the `run_jobs` method)
- Action: Please update the implementation to incorporate the `min_jobs` logic into the job aggregation process, or provide additional documentation explaining how this resource will be integrated in subsequent changes.
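How a `min_jobs` resource would interact with scheduler-side aggregation is not specified in this PR; one possible reading (purely illustrative — the function and data shapes here are invented, not the Snakemake scheduler's API) is that the scheduler holds back ready jobs of a rule until at least `min_jobs` of them have accumulated, so the executor sees batches large enough for arrays:

```python
def release_ready_jobs(ready, min_jobs=1):
    # Sketch only: hold back rules that have fewer than min_jobs ready jobs,
    # so the executor receives larger same-rule batches for array submission.
    # 'ready' maps rule name -> list of ready jobs (any objects).
    return {rule: jobs for rule, jobs in ready.items() if len(jobs) >= min_jobs}

ready = {"align": ["j1", "j2", "j3"], "sort": ["j4"]}
print(release_ready_jobs(ready, min_jobs=2))  # {'align': ['j1', 'j2', 'j3']}
```

A real implementation would also need a release valve (e.g. a timeout or end-of-DAG flush) so under-threshold jobs are not starved — the PR's note that aggregation "might contradict other efficiency goals" points at exactly this tension.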
🧰 Tools
🪛 Ruff (0.8.2)
173-173: Loop control variable `rule_name` not used within loop body. Rename unused `rule_name` to `_rule_name` (B007)