The `openai_chat_interface.py` file contains the implementation of the `OpenAI_LLM` class, which provides an interface for interacting with OpenAI's language models and managing chat interactions.
To use the `OpenAI_LLM` class, import it as follows:

```python
from openai_chat_interface import OpenAI_LLM, create_message, calculate_cost
```
Create an instance of `OpenAI_LLM`:

```python
llm = OpenAI_LLM(
    api_key=None,
    model="gpt-3.5-turbo",
    temperature=1.0,
    system_message='You are a helpful assistant. Answer the user query',
    user_message='{query}',
    functions=None,
    function_call=None,
)
```
The constructor takes the following parameters:

- `api_key` (optional): Your OpenAI API key. If not provided, it will attempt to load it from the `OPENAI_API_KEY` environment variable.
- `model` (optional): The OpenAI language model to use. Default is `"gpt-3.5-turbo"`.
- `temperature` (optional): The temperature value for generating responses. Default is `1.0`.
- `system_message` (optional): The system message to be used in chat interactions. Default is `'You are a helpful assistant. Answer the user query'`.
- `user_message` (optional): The user message template to be used in chat interactions. Default is `'{query}'`.
- `functions` (optional): The list of functions to be used in chat interactions. Default is `None`.
- `function_call` (optional): The function call type to be used in chat interactions. Default is `"auto"`.
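For instance, here is a minimal sketch of a customized instance based on the parameters above; the temperature value is purely illustrative:

```python
import os

from openai_chat_interface import OpenAI_LLM

# api_key falls back to the OPENAI_API_KEY environment variable when
# omitted; a lower temperature is used here only for illustration.
llm = OpenAI_LLM(
    api_key=os.getenv("OPENAI_API_KEY"),
    model="gpt-3.5-turbo",
    temperature=0.2,
)
```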
Before running the model, you can add messages to the chat history. Use the `add_messages` method to add a list of messages:

```python
messages = [
    create_message("user", "What is the capital of France?"),
    create_message("assistant", "The capital of France is Paris."),
]
llm.add_messages(messages)
```
To run the model and generate a response, use the `run` method:

```python
llm.run()
```
By default, this will use the user message template provided during initialization and generate a response based on the chat history.
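Because the default `user_message` template is `'{query}'`, you can also fill the template by passing `query` to `run`, as the chat example later in this document does:

```python
# Fills the '{query}' placeholder in the user_message template
llm.run(query="What is the capital of Italy?")
print(llm.response_content)
```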
You can also customize the user message for a specific `run` call:

```python
llm.run(messages=[create_message("user", "What is the capital of Spain?")])
```
In this case, the provided messages are used for that specific `run` call instead of the chat history.
After running the model, you can access the response content using the `response_content` property:

```python
response_content = llm.response_content
print(response_content)
```
You can access the response message object using the `response_message` property:

```python
response_message = llm.response_message
print(response_message)
```
If the response contains a function call, you can access the function call object, the function name, and the function arguments using the `response_function`, `response_function_name`, and `response_function_arguments` properties:

```python
response_function = llm.response_function
print(response_function)

response_function_name = llm.response_function_name
print(response_function_name)

response_function_arguments = llm.response_function_arguments
print(response_function_arguments)
```
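These properties pair naturally with the `call_func` method described later in this document. A minimal dispatch sketch, assuming `response_function` has the same `{"name": ..., "arguments": ...}` shape that `call_func` accepts:

```python
# Hypothetical dispatch: only call a function when the model asked for one.
# math_functions is an OpenAI_functions instance (see below).
if llm.response_function is not None:
    result = math_functions.call_func(llm.response_function)
    print(result)
```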
After running the model, you can retrieve the finish reason using the `finish_reason` property:

```python
finish_reason = llm.finish_reason
print(finish_reason)
```
To clear the chat history, use the `clear_memory` method:

```python
llm.clear_memory()
```
The package provides example files that demonstrate the usage of the `OpenAI_LLM` class:

- `decorator_example.py`: Demonstrates the usage of the `@openaifunc` decorator and the `OpenAI_functions` class.
- `chat_and_func_example_single_tool_use.py`: Demonstrates the usage of the `OpenAI_LLM` class with a single tool use.
- `chat_and_func_example_multi_tool_use.py`: Demonstrates the usage of the `OpenAI_LLM` class with multiple tool uses.
Feel free to explore and modify these example files to understand how to use the `OpenAI_LLM` class effectively.
This Python package provides the classes `OpenAI_functions`, `OpenAI_function_collection`, and `OpenAI_LLM` to dynamically load and manage Python functions marked with the `@openaifunc` decorator and to interact with OpenAI's language models. It can be used to organize and call functions from different modules easily, and to build chat interfaces on top of OpenAI's models.
First, import the package at the top of your Python code:

```python
from openai_decorator import OpenAI_functions, openaifunc
```
Then, add the `@openaifunc` decorator to the functions you want to manage:

```python
@openaifunc
def add_numbers(a: int, b: int) -> int:
    """
    This function adds two numbers.
    """
    return a + b
```
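The decorated function is assumed to remain directly callable, so you can still use it as ordinary Python:

```python
# Assumes @openaifunc registers the function and returns it unchanged
print(add_numbers(3, 4))  # 7
```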
Next, create an instance of `OpenAI_functions` by loading a Python file containing the decorated functions:

```python
math_functions = OpenAI_functions.from_file("path/to/math_funcs.py")
```
You can now access the list of functions and their mappings, and call the functions:

```python
print(math_functions.func_list)
print(math_functions.func_mapping)

result = math_functions.call_func({"name": "add_numbers", "arguments": "{ \"a\": 3, \"b\": 4 }"})
print(result)  # Output: 7
```
Import the `OpenAI_function_collection` class:

```python
from openai_decorator import OpenAI_function_collection
```
Create an instance by loading a folder containing Python files with decorated functions:

```python
all_functions = OpenAI_function_collection.from_folder("path/to/tools")
```
You can now access the combined function lists, mappings, and descriptions, and call functions across all loaded files:

```python
print(all_functions.func_list)
print(all_functions.func_description)
print(all_functions.func_mapping)

result = all_functions.call_func({"name": "add_numbers", "arguments": "{ \"a\": 5, \"b\": 5 }"})
print(result)  # Output: 10
```
The `OpenAI_LLM` class provides an interface to interact with OpenAI's language models and manage chat interactions.
First, import the required classes and functions:

```python
from openai_chat_interface import OpenAI_LLM, create_message, calculate_cost
```
Create an instance of `OpenAI_LLM`:

```python
llm = OpenAI_LLM(api_key="your-api-key", system_message='You are a helpful assistant. Answer the user query')
```
You can run the model with user input:

```python
user_input = "What's the weather like today?"
llm.run(query=user_input)
print(llm.response_content)  # Outputs the model's response
```
You can add messages, clear messages, and perform various operations with the chat interface; a rough combined sketch follows, and the `chat_example.py` file contains a complete example.
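This sketch uses only the methods and properties documented above; the message contents are illustrative:

```python
from openai_chat_interface import OpenAI_LLM, create_message

llm = OpenAI_LLM(api_key="your-api-key")

# Seed the chat history, run the model, inspect the result, then reset.
llm.add_messages([create_message("user", "Name one planet.")])
llm.run()
print(llm.response_content)
print(llm.finish_reason)
llm.clear_memory()
```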
Function descriptions are extracted from the docstrings within the Python files. You can write standard Python docstrings to describe your functions:
```python
@openaifunc
def multiply_numbers(a: int, b: int) -> int:
    """
    This function multiplies two numbers.
    :param a: The first number to multiply
    :param b: The second number to multiply
    """
    return a * b
```
The `OpenAI_functions` class will automatically parse the docstrings and include them in the `func_description` property.
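For example, to inspect the descriptions parsed from the file loaded earlier:

```python
# Descriptions parsed from the docstrings in math_funcs.py
print(math_functions.func_description)
```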
The `OpenAI_function_collection` class allows you to manage multiple `OpenAI_functions` instances in one place. You can load functions from multiple files or an entire folder and access them all through the collection instance.
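Since `OpenAI_LLM` accepts a `functions` parameter, one plausible way to connect the two halves of the package is to pass the collection's `func_list` when constructing the model. This is a sketch under that assumption, not a documented recipe:

```python
from openai_chat_interface import OpenAI_LLM
from openai_decorator import OpenAI_function_collection

all_functions = OpenAI_function_collection.from_folder("path/to/tools")

# Assumes func_list matches the schema the functions parameter expects.
llm = OpenAI_LLM(api_key="your-api-key", functions=all_functions.func_list)
llm.run(query="Add 2 and 3 for me.")

if llm.response_function is not None:
    print(all_functions.call_func(llm.response_function))
```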
The repository includes example files demonstrating the usage of these classes, including `math_funcs.py`, `weather_funcs.py`, `main.py`, and `chat_example.py`. Feel free to explore and modify them to understand how to use the package effectively.