[Core feature] Map node array tasks do not support `@dynamic`, `@eager`, or `@workflow` nodes #5684
Comments
A potential workaround is to use dynamic tasks within a static workflow, but this does not fully address the need for partial result consumption. The Flyte team is working on extending support for mapping over different Flyte node types, including dynamic tasks and subworkflows.
Wow, what timing - I just came across this same situation myself. Glad to see it's on the table.
@fg91 @rovangju The reason is that the fundamental way launch plans are mapped over requires a large state-storage system, which does not fit the default array node storage system. Union has a different engine that can store data efficiently and at higher resolution, surpassing these limits. It's a large lift to support in OSS and we do not currently plan to, but we will definitely revisit this in a few months as we upstream some of our learnings from Union.
Motivation: Why do you think this is important?
Map node array tasks allow users to map a `@task` over a list of inputs. Currently, other node types, i.e. `@dynamic`, `@eager`, and `@workflow`, are not supported.

As a user, I would like to implement the following logic, which today cannot be expressed in Flyte due to this limitation:
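The original code example did not survive extraction. As a rough plain-Python sketch of the desired semantics (the names `dynamic_subwf` and `consume_partial_results_task` come from the issue text; the failure pattern and the helper `map_with_partial_results` are illustrative assumptions, not Flyte API):

```python
# Plain-Python sketch of the desired map-over-dynamic semantics.
# In Flyte terms, dynamic_subwf would be a @dynamic node mapped with
# map_task, and consume_partial_results_task a downstream @task.

def dynamic_subwf(x: int) -> int:
    # Stand-in for a @dynamic sub-workflow; some inputs fail.
    if x % 3 == 0:
        raise RuntimeError(f"sub-workflow failed for input {x}")
    return x * x

def consume_partial_results_task(results: list) -> int:
    # Downstream task that should run even if some mapped
    # sub-workflows failed, consuming only the successful outputs.
    return sum(results)

def map_with_partial_results(inputs):
    # Tolerate individual failures (analogous to map_task's
    # min_success_ratio) and collect only the successful outputs.
    successes = []
    for x in inputs:
        try:
            successes.append(dynamic_subwf(x))
        except RuntimeError:
            continue  # failed branch: skip it, don't abort the map
    return consume_partial_results_task(successes)

print(map_with_partial_results([1, 2, 3, 4, 5, 6]))  # 1+4+16+25 = 46
```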
In this example, the task `consume_partial_results_task` can consume a subset of the outputs of `dynamic_subwf` (or a normal sub-workflow) which is being mapped, even if not all (dynamic) sub-workflows are successful. Currently, consuming partial results can only be done when mapping a `@task`, but not all logic can be compressed into a single task.

Goal: What should the final outcome look like, ideally?
Support every node type in `map_task`.

Describe alternatives you've considered
One can execute multiple (dynamic) sub-workflows and tolerate failures of some of them by using `WorkflowFailurePolicy.FAIL_AFTER_EXECUTABLE_NODES_COMPLETE`, but this does not allow consuming partial results of the successful (dynamic) sub-workflows.

Propose: Link/Inline OR Additional context
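A rough plain-Python simulation of the failure-policy alternative described above (our reading of the `FAIL_AFTER_EXECUTABLE_NODES_COMPLETE` semantics, not flytekit code): sibling nodes keep executing after a failure, but the downstream consumer depends on all branch outputs and is therefore skipped, so the workflow fails without consuming the partial results.

```python
# Plain-Python simulation (not flytekit) of the
# FAIL_AFTER_EXECUTABLE_NODES_COMPLETE semantics: independent nodes
# keep running after a failure, but a node depending on a failed
# node is skipped, so partial results cannot be consumed.

def run_workflow(subwf_inputs):
    branch_results = {}   # outputs of successful sub-workflows
    failed = set()        # sub-workflows that raised

    for x in subwf_inputs:
        try:
            if x % 3 == 0:
                raise RuntimeError(f"sub-workflow failed for {x}")
            branch_results[x] = x * x
        except RuntimeError:
            failed.add(x)  # record failure, keep executing siblings

    # The downstream consumer depends on *all* branch outputs; with
    # any upstream failure it is skipped and the workflow fails.
    if failed:
        return {"status": "FAILED", "consumed": None,
                "completed_branches": sorted(branch_results)}
    return {"status": "SUCCEEDED",
            "consumed": sum(branch_results.values()),
            "completed_branches": sorted(branch_results)}

print(run_workflow([1, 2, 3, 4]))
```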
No response
Are you sure this issue hasn't been raised already?
Have you read the Code of Conduct?