refactor: add support to outputs key in inference request inputs #3405
Description
According to the KServe documentation on the Open Inference Protocol, the server can optionally accept an `outputs` key in the inference request body that specifies which outputs (and therefore how many) the inference response should contain. For a model that takes an NxM tensor as input and produces multiple output values, the current implementation only returns as many outputs as there were inputs in the original request.
Example:
Consider the following inference request body:
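The snippet below is a minimal illustrative Open Inference Protocol request, written as a Python dict for readability (on the wire it is JSON); the model, tensor names, shapes, and values are placeholders, not taken from the original report:

```python
# Illustrative inference request body with a single input tensor.
# Names, shape, and data are placeholders.
inference_request = {
    "id": "42",
    "inputs": [
        {
            "name": "input-0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3, 0.4],
        }
    ],
}
```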
If the deployed model outputs multiple values, such as:
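For instance, a hypothetical handler whose `postprocess` returns two values for the single input above (the concrete values and their meaning are placeholders):

```python
# Hypothetical postprocess() return: two output values
# (e.g. class probabilities and a predicted label) for one input.
postprocess_output = [
    [0.91, 0.09],  # "probabilities"
    1,             # "predicted_label"
]
```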
`KServev2Envelope` will raise an `IndexError` in `_batch_to_json`, since the `postprocess` function returns more than one value.

This merge request introduces an optional `outputs` key in the inference request, so that the user can specify multiple outputs. For example:
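An illustrative request using the new optional `outputs` key, again shown as a Python dict; the output names are placeholders matching the two values from the hypothetical `postprocess` return above:

```python
# Same request as before, now naming the two expected outputs explicitly.
inference_request = {
    "id": "42",
    "inputs": [
        {
            "name": "input-0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3, 0.4],
        }
    ],
    "outputs": [
        {"name": "probabilities"},
        {"name": "predicted_label"},
    ],
}
```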
Those output names are taken into account in `_batch_to_json`, which then formats the response with as many outputs as were requested. If no `outputs` key is sent, `_batch_to_json` falls back to the original behavior and formats the response with as many entries as there were inputs.
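As a rough sketch of that behavior (this is not the actual `_batch_to_json` code, and the function and parameter names here are made up for illustration), the number of formatted outputs follows the requested output names when they are present and falls back to the number of inputs otherwise:

```python
from typing import Any, Dict, List, Optional


def format_outputs(
    predictions: List[Any],
    input_names: List[str],
    requested_outputs: Optional[List[Dict[str, Any]]] = None,
) -> List[Dict[str, Any]]:
    """Sketch of the described behavior, not the real _batch_to_json.

    If the request carried an "outputs" key, one response entry is built per
    requested output name; otherwise one entry is built per input, matching
    the original behavior.
    """
    if requested_outputs:
        names = [out["name"] for out in requested_outputs]
    else:
        names = input_names  # fallback: as many outputs as inputs

    return [
        {"name": name, "data": data}
        for name, data in zip(names, predictions)
    ]
```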
Type of change
Feature/Issue validation/testing
Checklist: