Conversation
@skrawcz skrawcz commented Mar 7, 2024

Adds some fixes, plus one idea for batch submission.

See the last commit message for more details.

skrawcz added 3 commits March 6, 2024 21:12
So that Slurm functions are run in their own thread/process.
The experiment tracker assumes an experiment is defined by its inputs, config,
and overrides. So if we want to batch-submit a set of input
permutations and have each one tracked, we need a distinct
driver for each permutation, hence the Hamilton within Hamilton.

This means that we cannot create a single DAG to view it all
(driver chaining is a roadmap item).
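The driver-per-permutation idea above can be sketched in plain Python. This is a minimal, hypothetical illustration: `run_experiment` is a placeholder standing in for building a fresh Hamilton driver (e.g. via `driver.Builder()` with the tracker adapter attached) and executing it, and the `lr`/`epochs` grid is made up for the example.

```python
import itertools

def run_experiment(inputs: dict) -> dict:
    # Placeholder: in the real version this would build a distinct Hamilton
    # driver for this permutation (so the experiment tracker adapter records
    # it as its own experiment) and call dr.execute(...) with these inputs.
    return {"inputs": inputs, "result": inputs["lr"] * inputs["epochs"]}

def batch_submit(grid: dict) -> list:
    # Expand the grid into every input permutation and run each one through
    # its own driver: the "Hamilton within Hamilton" pattern described above.
    keys = sorted(grid)
    runs = []
    for values in itertools.product(*(grid[k] for k in keys)):
        runs.append(run_experiment(dict(zip(keys, values))))
    return runs

# Hypothetical 2x2 grid -> four permutations, four separately tracked runs.
runs = batch_submit({"lr": [0.1, 0.01], "epochs": [5, 10]})
```

The trade-off is as stated: each permutation is tracked individually, but there is no single outer DAG tying the runs together.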

Otherwise:

 - does anything need to be parallel within the inner DAG?

A possible feature to help here would be to make
@subdag take in distinct adapters... or we implement
driver chaining...

Anyway, let me know what you think. If you want a single
DAG we could use parameterized_subdag, but that would
come at the cost of visibility with the experiment tracker,
unless a whole permutation of inputs counts as a single experiment.

There is always, of course, the option of modifying more of
Hamilton to help here.