This repository was archived by the owner on Sep 18, 2024. It is now read-only.

Support streaming as part of thread runs and LLM generations #38

Open
multipletwigs opened this issue Mar 30, 2024 · 0 comments

Comments

@multipletwigs
Collaborator

multipletwigs commented Mar 30, 2024

Problem Statement

  1. A previous PR, Assistant responds to message #9, introduced the concept of thread runs, where we wait for a response based on the content of the thread.
  2. We should have the option to stream the answer back to the consumer of the API to accommodate the slow response times of LLMs.
  3. While the adapters are currently generators, we have not yet supported streaming over the network.
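Since the adapters already expose generators, one way to bridge point 3 is to wrap a generator in Server-Sent Events framing so tokens reach the API consumer as they are produced. The sketch below is a minimal illustration in Python, assuming a token-yielding adapter generator; the `sse_stream` and `fake_adapter` names are hypothetical, not part of this codebase:

```python
import json
from typing import Iterator


def sse_stream(token_generator: Iterator[str]) -> Iterator[str]:
    """Wrap an adapter's token generator as Server-Sent Events chunks."""
    for token in token_generator:
        # Each SSE event carries one token as a JSON payload.
        yield f"data: {json.dumps({'token': token})}\n\n"
    # A sentinel event tells the client the run has finished.
    yield "data: [DONE]\n\n"


def fake_adapter() -> Iterator[str]:
    # Stand-in for an LLM adapter generator.
    yield from ["Hello", ", ", "world"]


chunks = list(sse_stream(fake_adapter()))
```

A web framework's streaming response type (e.g. a chunked HTTP response with `Content-Type: text/event-stream`) could then consume this generator directly, so the client starts receiving tokens before the full completion is ready.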