RuntimeError: no running event loop when using cl.make_async alongside cl.instrument_openai() #1868

Open
dominpm opened this issue Feb 6, 2025 · 0 comments
Labels: bug (Something isn't working), needs-triage

dominpm commented Feb 6, 2025

Description

When cl.instrument_openai() is enabled, any OpenAI request goes through Chainlit’s instrumentation. However, if that request is made within a function wrapped by cl.make_async(...), the instrumentation eventually calls asyncio.create_task() from a background thread (which lacks a running event loop). As a result, the following error occurs:

Error: RuntimeError: no running event loop

2025-02-06 11:06:06 - no running event loop
Traceback (most recent call last):
  File "C:\path\to\env\Lib\site-packages\chainlit\utils.py", line 44, in wrapper
    return await user_function(**params_values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\path\to\env\Lib\site-packages\chainlit\callbacks.py", line 118, in with_parent_id
    await func(message)
  File "C:\path\to\project\app.py", line 1052, in main
    await for_custom_starter_translate_doc()
  File "C:\path\to\project\app.py", line 601, in for_custom_starter_translate_doc
    translated_file = await cl.make_async(translate_file)(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\path\to\env\Lib\site-packages\asyncer\_main.py", line 365, in wrapper
    return await anyio.to_thread.run_sync(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\path\to\env\Lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\path\to\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 2461, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "C:\path\to\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 962, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\path\to\project\custom_starters\translate.py", line 143, in translate_file
    translated_text = translate_text(text, model_name=model_name, target_language=target_language)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\path\to\project\custom_starters\translate.py", line 50, in translate_text
    response = translation_chain.invoke({"text_to_translate": text})
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\path\to\env\Lib\site-packages\langchain_core\runnables\base.py", line 3016, in invoke
    input = context.run(step.invoke, input, config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\path\to\env\Lib\site-packages\langchain_core\language_models\chat_models.py", line 284, in invoke
    self.generate_prompt(
  File "C:\path\to\env\Lib\site-packages\langchain_core\language_models\chat_models.py", line 860, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\path\to\env\Lib\site-packages\langchain_core\language_models\chat_models.py", line 690, in generate
    self._generate_with_cache(
  File "C:\path\to\env\Lib\site-packages\langchain_core\language_models\chat_models.py", line 925, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^
  File "C:\path\to\env\Lib\site-packages\langchain_openai\chat_models\base.py", line 781, in _generate
    response = self.client.create(**payload)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\path\to\env\Lib\site-packages\literalai\wrappers.py", line 66, in wrapped
    result = after_func(result, context, *args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\path\to\env\Lib\site-packages\literalai\instrumentation\openai.py", line 362, in after
    on_new_generation(
  File "C:\path\to\env\Lib\site-packages\chainlit\openai\__init__.py", line 51, in on_new_generation
    asyncio.create_task(step.send())
  File "C:\path\to\env\Lib\asyncio\tasks.py", line 381, in create_task
    loop = events.get_running_loop()
           ^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: no running event loop

Steps to Reproduce

  1. Enable instrumentation with cl.instrument_openai().
  2. Create a function that makes an OpenAI API call (e.g., via a LangChain Chain).
  3. Wrap that function with cl.make_async(...), causing it to run in a worker thread.
  4. Call the wrapped function within an async Chainlit handler.
  5. Observe that the call fails with RuntimeError: no running event loop.
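
The failure mode can be reproduced without Chainlit at all: asyncio.create_task() requires a running event loop in the *current* thread, and a worker thread spawned by anyio.to_thread.run_sync (which backs cl.make_async) has none. A minimal stdlib sketch (using asyncio.to_thread as a stand-in for cl.make_async):

```python
import asyncio

def sync_work():
    # Runs in a worker thread, like a function wrapped by cl.make_async.
    # There is no running event loop in this thread, so create_task fails.
    coro = asyncio.sleep(0)
    try:
        asyncio.create_task(coro)
        return "no error"
    except RuntimeError as e:
        coro.close()  # avoid a "coroutine was never awaited" warning
        return str(e)

async def main():
    # asyncio.to_thread plays the role of anyio.to_thread.run_sync here.
    return await asyncio.to_thread(sync_work)

print(asyncio.run(main()))  # → no running event loop
```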

Expected Behavior

Calls to an OpenAI endpoint (instrumented by Chainlit) within a background thread should either succeed or bypass streaming instrumentation gracefully without causing a runtime error.


Actual Behavior

Chainlit’s instrumentation tries to schedule an async task (asyncio.create_task(...)) in a worker thread, where no event loop is running. This raises RuntimeError: no running event loop.


Environment

  • Chainlit version: 1.3.2

I believe the root cause is that anyio.to_thread.run_sync executes the wrapped function in a worker thread that has no running event loop of its own. Meanwhile, Chainlit’s instrumentation tries to stream or log messages by calling asyncio.create_task from within that thread, which triggers the no running event loop error.
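
One possible direction for a fix (a sketch of the general technique, not Chainlit’s actual code): capture the main event loop before hopping into the worker thread, then hand coroutines back to it with asyncio.run_coroutine_threadsafe, which is explicitly safe to call from a thread that has no running loop. The names send_step and instrumented_sync_work below are illustrative stand-ins:

```python
import asyncio

async def send_step():
    # Stand-in for the coroutine the instrumentation wants to schedule
    # (the equivalent of step.send() in Chainlit's on_new_generation).
    await asyncio.sleep(0)
    return "sent"

def instrumented_sync_work(loop):
    # Worker-thread code: instead of asyncio.create_task (which needs a
    # running loop in *this* thread), submit the coroutine to the main loop.
    future = asyncio.run_coroutine_threadsafe(send_step(), loop)
    return future.result(timeout=5)  # block the worker until it completes

async def main():
    loop = asyncio.get_running_loop()  # captured while the loop is running
    return await asyncio.to_thread(instrumented_sync_work, loop)

print(asyncio.run(main()))  # → sent
```

This works because the main loop keeps running while main() awaits the worker thread, so the submitted coroutine is executed there and the worker simply waits on the returned concurrent.futures.Future.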

@dosubot dosubot bot added the bug Something isn't working label Feb 6, 2025