This repository was archived by the owner on Sep 18, 2024. It is now read-only.
Building on a previous PR (Assistant responds to message, #9), we have introduced the concept of thread runs, where we await a response based on the content of the thread.
We should have the option to stream the answer back to the consumer of the API to accommodate the slow response time of LLMs.
While the adapters are currently generators, we do not yet support streaming over the network connection.
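Since the adapters already yield tokens as generators, one possible approach is to wrap that generator in a chunked HTTP response. A minimal sketch, with no web framework, is below; `fake_adapter`, `run_thread_stream`, and the SSE-style framing are assumptions for illustration, not the repository's actual API:

```python
def fake_adapter(prompt: str):
    # Stand-in for a real LLM adapter; a real one would yield
    # tokens as the model produces them.
    for token in ["Hello", ", ", "world", "!"]:
        yield token

def run_thread_stream(prompt: str):
    # Hypothetical run wrapper: re-yield each adapter token as a
    # server-sent-events-style chunk, so the consumer can render
    # partial output instead of waiting for the full completion.
    for token in fake_adapter(prompt):
        yield f"data: {token}\n\n"

if __name__ == "__main__":
    for chunk in run_thread_stream("hi"):
        print(chunk, end="")
```

In practice this generator could be handed to a framework's streaming response type (e.g. FastAPI's `StreamingResponse`) so chunks are flushed to the client as they are produced.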