```
name: "ensemble"
platform: "ensemble"
max_batch_size: 128
input [
  {
    name: "text_input"
    data_type: TYPE_STRING
    dims: [ -1 ]
  }
]
output [
  {
    name: "text_output"
    data_type: TYPE_STRING
    dims: [ -1, -1 ]
  },
  {
    name: "output_token_lengths"
    data_type: TYPE_INT32
    dims: [ -1, -1 ]
  }
]
ensemble_scheduling {
  step [
    {
      model_name: "preprocessing"
      model_version: -1
      input_map {
        key: "QUERY"
        value: "text_input"
      }
      output_map {
        key: "REQUEST_INPUT_LEN"
        value: "_REQUEST_INPUT_LEN"
      }
    },
    {
      model_name: "tensorrt_llm"
      model_version: -1
      input_map {
        key: "input_lengths"
        value: "_REQUEST_INPUT_LEN"
      }
      output_map {
        key: "output_ids"
        value: "_TOKENS_BATCH"
      }
    },
    {
      model_name: "postprocessing"
      model_version: -1
      input_map {
        key: "TOKENS_BATCH"
        value: "_TOKENS_BATCH"
      }
      output_map {
        key: "OUTPUT"
        value: "text_output"
      }
      output_map {
        key: "OUTPUT1"
        value: "output_token_lengths"
      }
    }
  ]
}
```
This is my simplified description of the ensemble. I want to know how to extract the intermediate value at ensemble_scheduling --> model_name "preprocessing" --> output_map --> value: "_REQUEST_INPUT_LEN" and return it as an output of the ensemble. Are there any examples?
My question is the same as #71, so I'm closing this for now.
Here is a demo: you can pass REQUEST_INPUT_LEN out from the middle of the ensemble. I've tested it and it works: https://github.com/triton-inference-server/tensorrtllm_backend/blob/57de9f572f75f61fe17b668eea1430b030e1b721/all_models/inflight_batcher_llm/postprocessing/config.pbtxt
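In general, the way to expose an intermediate tensor from a Triton ensemble is to declare it as an additional entry in the ensemble's `output` list and point the producing step's `output_map` value at that output name; Triton will both return it to the client and still feed it to any later steps that consume it. A minimal sketch against the config above (the output name `request_input_len` and its `dims` are my assumptions, not taken from the linked demo):

```
output [
  # ... existing text_output / output_token_lengths entries ...
  {
    # hypothetical name for the exposed intermediate tensor
    name: "request_input_len"
    data_type: TYPE_INT32
    dims: [ 1 ]
  }
]
ensemble_scheduling {
  step [
    {
      model_name: "preprocessing"
      model_version: -1
      input_map {
        key: "QUERY"
        value: "text_input"
      }
      output_map {
        key: "REQUEST_INPUT_LEN"
        # mapping to a declared ensemble output returns it to the client
        value: "request_input_len"
      }
    },
    {
      model_name: "tensorrt_llm"
      model_version: -1
      input_map {
        key: "input_lengths"
        # the same tensor can still be consumed by a later step
        value: "request_input_len"
      }
      # ... remaining maps unchanged ...
    }
    # ... postprocessing step unchanged ...
  ]
}
```

Because the value name now matches a declared ensemble output, no separate `_REQUEST_INPUT_LEN` intermediate name is needed.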