# Developer Guide

## Table of Contents
- Community
- Getting Started
- Configuration Basics
- Service Providers
- Operations Reference
- REST API
- Websocket Events
- Creating Custom Integrations
- Known Issues
## Community

Join the Discord for discussions related to this project!
## Getting Started

An example configuration is provided to help you get started. Example prompts can be found under the prompts directory.
## Configuration Basics

Customize your AI character's personality and scenario using prompt files:
Directory Structure:

- `prompts/instructions/` - General system/behavior instructions
- `prompts/characters/` - Character personality prompts
- `prompts/scenes/` - Scenario/scene prompts
Configuration Options:

```yaml
instruction_prompt_filename: "default"  # Filename without .txt extension
character_prompt_filename: "assistant"  # Filename without .txt extension
scene_prompt_filename: "casual"         # Filename without .txt extension
character_name: "JAIson"                # Name of the character
history_length: 20                      # Number of conversation lines to retain
# Optional: Translate usernames. You can probably exclude this.
name_translations:
  old-name: new-name
```

Operations are loaded in the order specified in your configuration file. The pipeline processes: Speech Input → Text Processing → Speech Output.
Example Configuration:

```yaml
operations:
  - role: stt          # Speech-to-Text
    id: fish
  - role: t2t          # Text-to-Text (LLM)
    id: openai
  - role: filter_text  # Text filters (applied in order)
    id: filter_clean
  - role: filter_text
    id: chunker_sentence
  - role: tts          # Text-to-Speech
    id: azure
  - role: filter_audio # Audio filters (applied in order)
    id: pitch
```

Important Notes:

- Only one STT, T2T, and TTS operation can be active (later configs override earlier ones)
- Multiple filters can be active, and they are applied in the order listed
- Each operation may have additional configuration parameters
## Service Providers

### Local Providers

Run everything on your own hardware without external API calls.

#### KoboldCPP

- Compatibility: Limited (depends on model)
- Cost: Free (local)
- Supports: STT, T2T, TTS
Installation:

1. Download KoboldCPP from releases:
   - NVIDIA GPU (e.g. RTX series): `koboldcpp.exe` for Windows or `koboldcpp-linux-x64` for Linux
   - Older NVIDIA GPU (CUDA 11): `koboldcpp-oldpc.exe` for Windows or `koboldcpp-linux-x64-oldpc` for Linux
   - Non-NVIDIA (no CUDA): `koboldcpp-nocuda.exe` for Windows or `koboldcpp-linux-x64-nocuda` for Linux

   Place the KoboldCPP executable in the `models/kobold/` directory.
2. Download models:
   - For T2T (LLM): Download GGUF models as described here. Generally, any text-generation GGUF model from HuggingFace will work as long as your hardware meets its requirements.
   - For STT (Whisper): Download the desired `.bin` file from koboldcpp/whisper
     - Recommended: `base.en` or `tiny.en` for balanced performance (English only), or `small` for multilingual support.

   Place all models in `models/kobold/`.
3. Configure KoboldCPP:
   - Run the KoboldCPP executable to open the configuration interface
   - Under Quick Launch:
     - Select the correct GPU ID from the dropdown
     - Disable "Launch Browser"
     - Enable "Quiet Mode" (optional, reduces console spam)
     - Enable "Use FlashAttention" (improves performance)
     - Set Context Size based on your available VRAM (2048-8192+ tokens)
     - Click "Browse" and load your GGUF LLM model
   - Under Context (optional):
     - Enable "Quantize KV Cache" and set it to 8-bit or 4-bit to reduce VRAM usage with minimal quality impact
   - Under Audio (for STT):
     - Click "Browse" and load your Whisper model (`.bin` file)
   - IMPORTANT: Click "Save" and save the configuration as a `.kcpps` file in `models/kobold/`
4. Update the JAIson configuration:

   ```yaml
   kobold_filepath: "C:\\path\\to\\models\\kobold\\koboldcpp.exe"
   kcpps_filepath: "C:\\path\\to\\models\\kobold\\myconfig.kcpps"
   ```

   Note: On Windows, use double backslashes (`\\`) in file paths.
#### MeloTTS

- Compatibility: All platforms
- Cost: Free (local)
- Supports: TTS
MeloTTS provides fast, high-quality local text-to-speech with full control over voice characteristics.
Recommended for: Users who want consistent latency and are comfortable with model configuration.
Installation:

1. MeloTTS was automatically installed during setup when you ran `pip install --no-deps -r requirements.no_deps.txt`
2. Browse the MeloTTS repo to see available languages and accents. Then, update the `speaker_id` in the JAIson config file. The available speakers are: `EN-Default`, `EN-US`, `EN-BR`, `EN_INDIA`, `EN-AU`. Here is an example config for English (Australian accent):

   ```yaml
   - role: tts
     id: melo
     config_filepath: null
     model_filepath: null
     speaker_id: EN-AU
     device: cuda
     language: EN
     sdp_ratio: 0.7
     noise_scale: 0.6
     noise_scale_w: 0.8
     speed: 1.05
   ```
#### RVC

- Compatibility: Limited (requires GPU with 8GB+ VRAM for training)
- Cost: Free (local)
- Supports: Audio filtering (voice conversion)
Installation:

1. Ensure prerequisites:
   - Git and Git LFS installed on your system
2. Clone the RVC Project:

   ```
   git clone https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI.git
   ```

3. Download model assets:

   ```
   cd Retrieval-based-Voice-Conversion-WebUI
   python tools/download_models.py
   ```

4. Verify the download:
   - Check `assets/hubert/` for `hubert_base.pt` (NOT `hubert_inputs.pth`)
5. Copy assets to JAIson:
   - Copy the entire `assets/` folder contents to `assets/rvc/` in this project
6. Train or acquire a voice model:
   - Training: Requires an NVIDIA GPU with 8GB+ VRAM. See the RVC documentation
   - Pre-trained: Find community models online
7. Install the voice model:
   - Copy the `.pth` file to `assets/rvc/weights/`
   - Copy the `.index` file (or the folder containing it) to `models/rvc/`
     - If you only have the `.index` file, create a folder named after your `.pth` file
8. Environment setup:
   - Copy `.env-template` if not already done
   - Ensure the RVC section exists (DO NOT MODIFY)
### Cloud Providers

Use third-party APIs for high-quality results without local hardware requirements.

#### Azure Speech Services

- Compatibility: All platforms
- Cost: Free tier available
- Supports: STT, TTS
Setup:

1. Go to the Azure Portal and sign in
2. Navigate to Resource groups
3. Click "Create" and configure:
   - Use the default subscription (free tier for new accounts)
   - Select a region close to your location
4. Open your new resource group and click "Create"
5. Search for "SpeechServices" and create a Speech service:
   - Select your resource group
   - Choose a nearby region
   - Select "Standard S0" (free tier)
6. Open the Speech service and scroll to the bottom
7. Copy one of the `KEY` values and the `Location/Region`
8. Update your `.env` file:

   ```
   AZURE_API_KEY=your_key_here
   AZURE_REGION=your_region_here
   ```
#### Fish Audio

- Compatibility: All platforms
- Cost: Pay-per-use (premium tier not required)
- Supports: STT, TTS (voice cloning)
Setup:

1. Go to Fish Audio and sign in
2. Navigate to the "API" tab
3. Purchase API credits if needed
4. Go to "API Keys" and click "Create Secret Key"
5. Set a long expiry or "Never expires"
6. Copy the "Secret Key" from the "API List"
7. Update your `.env` file:

   ```
   FISH_API_KEY=your_key_here
   ```
#### OpenAI

- Compatibility: All platforms (or OpenAI-compatible APIs)
- Cost: Pay-per-use
- Supports: STT, T2T, TTS
Setup:

1. Go to the OpenAI Platform and sign in
2. Navigate to "Profile" → "Secrets"
3. Create and copy a new API key
4. Update your `.env` file:

   ```
   OPENAI_API_KEY=your_key_here
   ```
For OpenAI-Compatible APIs: Many services (like Ollama, LocalAI) offer OpenAI-compatible endpoints. Configure the base URL in your YAML:

```yaml
base_url: "http://localhost:11434/v1"  # Example for Ollama
```

## Operations Reference

### Speech-to-Text (STT)

Convert spoken audio into text.
- Service: Azure Speech Services (Cloud)
- Cost: Free tier available
- Config:
  ```yaml
  - role: stt
    id: azure
    language: "en-US"  # See Azure language codes
  ```
- Service: Fish Audio (Cloud)
- Cost: Pay-per-use
- Config:
  ```yaml
  - role: stt
    id: fish
  ```
- Service: KoboldCPP (Local)
- Cost: Free
- Config:
  ```yaml
  - role: stt
    id: kobold
    suppress_non_speech: true
    langcode: "en"
  ```
- Service: OpenAI or compatible (Cloud/Local)
- Cost: Varies
- Config:
  ```yaml
  - role: stt
    id: openai
    base_url: "https://api.openai.com/v1"  # Optional, for custom endpoints
    model: "whisper-1"
    language: "en"  # See Whisper language codes
  ```
### Text-to-Text (T2T)

Process and generate conversational responses using LLMs.
- Service: KoboldCPP (Local)
- Cost: Free
- Features: Advanced sampler controls
- Config:
  ```yaml
  - role: t2t
    id: kobold
    max_context_length: 4096  # Context length set during Kobold config
    max_length: 200           # Max response length
    quiet: true               # Quiet mode
    rep_pen: 1.1              # Repetition penalty - depends on model, but 1.1 is common
    rep_pen_range: 1024       # Depends on model
    temperature: 0.7          # Controls randomness: higher is more creative, lower is more deterministic
    top_k: 40                 # Limits the next word selection to the top X most likely candidates
    top_p: 0.95               # Nucleus sampling: only considers tokens that make up the top X% probability mass
    typical: 1                # Typical sampling threshold; 1 = disabled
  ```
- Service: OpenAI or compatible (Cloud/Local)
- Cost: Varies
- Config:
  ```yaml
  - role: t2t
    id: openai
    base_url: "https://api.openai.com/v1"
    model: "gpt-4"
    temperature: 0.7
    top_p: 1.0
    presence_penalty: 0.0
    frequency_penalty: 0.0
  ```
### Text-to-Speech (TTS)

Convert text responses into spoken audio.
- Service: MeloTTS (Local)
- Cost: Free
- Quality: Fast, consistent, highly configurable
- Config:
  ```yaml
  - role: tts
    id: melo
    config_filepath: null
    model_filepath: null
    speaker_id: "EN-US"  # Or whichever voice you prefer
    device: "cuda"       # or "cpu"
    language: "EN"
    sdp_ratio: 0.5       # Expressiveness and rhythmic variation
    noise_scale: 0.6     # Energy and emotional variance
    noise_scale_w: 0.8   # Cadence and smoothness; "breathiness"
    speed: 1.0
  ```
- Service: Azure Speech Services (Cloud)
- Cost: Free tier available
- Quality: Natural, professional voices
- Config:
  ```yaml
  - role: tts
    id: azure
    voice: "en-US-AshleyNeural"  # See Azure voice gallery
  ```
- Service: Fish Audio (Cloud)
- Cost: Pay-per-use
- Quality: Voice cloning capability
- Config:
  ```yaml
  - role: tts
    id: fish
    model_id: "your_model_id"
    backend: "default"
    normalize: true
    latency: "normal"  # "normal" or "balanced"
  ```
- Service: KoboldCPP (Local)
- Cost: Free
- Note: Basic quality, included for completeness
- Config:
  ```yaml
  - role: tts
    id: kobold
    voice: "default"
  ```
- Service: OpenAI or compatible (Cloud/Local)
- Cost: Varies
- Config:
  ```yaml
  - role: tts
    id: openai
    base_url: "https://api.openai.com/v1"
    model: "tts-1"
    voice: "alloy"
  ```
- Service: System TTS (Local)
- Cost: Free
- Note: Uses OS speech synthesizer (SAPI/ESpeak)
- Config:
  ```yaml
  - role: tts
    id: pytts
    voice: "voice_id"  # List printed on startup
    gender: "female"
  ```
### Audio Filters

Post-process generated audio.
- Service: Local processing
- Cost: Free
- Purpose: Adjust voice pitch
- Config:
  ```yaml
  - role: filter_audio
    id: pitch
    pitch_amount: 2  # Semitones (+/-)
  ```
- Service: RVC (Local)
- Cost: Free
- Purpose: Voice conversion/transformation
- Config:
  ```yaml
  - role: filter_audio
    id: rvc
    voice: "model_name"
    f0_up_key: 0
    f0_method: "rmvpe"
    index_rate: 0.75
    filter_radius: 3
    resample_sr: 0
    rms_mix_rate: 0.25
    protect: 0.33
  ```
### Text Filters

Process text before speech synthesis.
- Service: Local processing
- Cost: Free
- Purpose: Clean and normalize text output
- Config:
  ```yaml
  - role: filter_text
    id: filter_clean
  ```
- Service: Local ML model
- Cost: Free
- Purpose: Detect emotion in responses
- Model: SamLowe/roberta-base-go_emotions
- Config:
  ```yaml
  - role: filter_text
    id: emotion_roberta
  ```
- Service: Local ML model
- Cost: Free
- Purpose: Content moderation and filtering (remove for uncensored output)
- Model: Koala/Text-Moderation
- Config:
  ```yaml
  - role: filter_text
    id: mod_koala
  ```
- Service: Local processing
- Cost: Free
- Purpose: Split text into sentences for smoother TTS
- Config:
  ```yaml
  - role: filter_text
    id: chunker_sentence
  ```
### Embedding

Generate text embeddings for semantic operations.
- Service: OpenAI or compatible (Cloud/Local)
- Cost: Varies
- Config:
  ```yaml
  - role: embedding
    id: openai
    base_url: "https://api.openai.com/v1"
    model: "text-embedding-3-small"
  ```
## REST API

The API spec follows the OpenAPI 3.1.0 standard and can be found in api.yaml.
Please read the description of the endpoint you are interested in. If it specifies the use of websockets to communicate status or results, you will need to set up a websocket to receive updates on your request. Such REST API endpoints return success when they successfully queue a job; this does not mean the job itself was successful.
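For example, queueing a response job from Python might look like this (a minimal sketch; the host and port are illustrative, and each endpoint's body schema is defined in api.yaml):

```python
import requests

# Queue a "response" job. The host/port and body are illustrative; consult
# api.yaml for the actual schema of each endpoint.
res = requests.post("http://localhost:5001/api/response", json={})

# A 2xx status here only means the job was queued successfully. Whether the
# job itself succeeds is reported later through websocket events.
print(res.status_code, res.json())
```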
Please see Websocket Events for websocket messages related to each job.
## Websocket Events

Websockets are used for several reasons:
- Ensure all applications are notified of changes even if they didn't request it
- Enable long-lived requests such as responses which may take a few seconds to finish
- Real-time feedback and streaming of responses to reduce latency
- Allow predictable, sequential behavior and locking state during response generation
While each job is unique, jobs follow similar patterns in the events they generate. The following are generated in order:
- Job start
- Job-specific events (as many as it needs) ...
- One of 2 events:
  - Job finish
  - Job cancelled
Jobs are run sequentially in the order they were queued. Events are also sent in the order they were generated. You can expect to receive all events in this predictable order and to process one job's events at a time.
These events are detailed in the following sections.
Job types and their originating REST endpoints:

- `response`: POST /api/response
- `context_clear`: DELETE /api/context
- `context_request_add`: POST /api/context/request
- `context_conversation_add_text`: POST /api/context/conversation/text
- `context_conversation_add_audio`: POST /api/context/conversation/audio
- `context_custom_register`: POST /api/context/custom
- `context_custom_remove`: DELETE /api/context/custom
- `context_custom_add`: PUT /api/context/custom
- `operation_load`: POST /api/operations/load
- `operation_reload_from_config`: POST /api/operations/reload
- `operation_unload`: POST /api/operations/unload
- `operation_use`: POST /api/operations/use
- `config_load`: PUT /api/config/load
- `config_update`: PUT /api/config/update
- `config_save`: POST /api/config/save
Possible error types:

- `operation_unknown_type`: Specified an unknown operation type
- `operation_unknown_id`: Specified an unknown operation ID for that type
- `operation_duplicate`: Tried loading a filter that's already loaded
- `operation_unloaded`: Tried using a filter that's not loaded
- `operation_active`: Tried activating an operation that's already active (should never occur; lmk if it does)
- `operation_inactive`: Tried deactivating an operation that's already inactive, or using an inactive operation (should never occur; lmk if it does)
- `config_unknown_field`: Tried updating or loading a configuration with an invalid field
- `config_unknown_file`: Tried loading a configuration file that doesn't exist
- `job_unknown`: Tried starting an invalid job type (should never occur; lmk if it does)
- `job_cancelled`: Job was cancelled via the REST API
### Job start

Signifies the start of a job's processing and the arguments provided. The arguments are those included in the original REST API call's body, if it was valid enough to create a job in the first place. For `audio_bytes`, due to its size, a boolean indicating whether it was included is returned instead.
```json
{
"status": 200,
"message": "job type",
"response": {
"job_id": "job uuid generated when first created",
"start": { "input argument keyword": "input argument value", ... }
}
}
```

### Job finish

Signifies the successful end of a job's processing.
```json
{
"status": 200,
"message": "job type",
"response": {
"job_id": "job uuid generated when first created",
"finished": true,
"success": true
}
}
```

### Job cancelled

Signifies the unsuccessful end of a job's processing. This may be due to an error during processing, or the result of an application cancelling the job through the REST API.
These will only be emitted once the job has started processing, even if the job was cancelled before then. Therefore, if an application cancels a job, it won't receive the cancelled event until all prior jobs have finished processing and this one is next up. The job is then cancelled immediately and all listeners are notified.
```json
{
"status": 200,
"message": "job type",
"response": {
"job_id": "job uuid generated when first created",
"finished": true,
"success": false,
"result": {
"type": "error type",
"reason": "error message"
}
}
}
```

### `response`

Events contain details about generation and are sent in the order they appear below.
Immediately after LLM generation but before text filters:

```json
{
"status": 200,
"message": "job type",
"response": {
"job_id": "job uuid generated when first created",
"finished": false,
"result": {
"instruction_prompt": "Instructions for the LLM",
}
}
}
```

```json
{
"status": 200,
"message": "job type",
"response": {
"job_id": "job uuid generated when first created",
"finished": false,
"result": {
"history": [
{"type": "raw", "message": "Example of raw user input"},
{"type": "request", "time": 1234, "message": "Example of request message"},
{"type": "chat", "time": 1234, "user": "some user or AI name", "message": "Example of chat message"},
{"type": "tool", "time": 1234, "tool": "some tool name", "message": "Example of tool result"},
{"type": "custom", "time": 1234, "id": "some custom context id", "message": "Example of context"}
],
}
}
}
```

```json
{
"status": 200,
"message": "job type",
"response": {
"job_id": "job uuid generated when first created",
"finished": false,
"result": {
"raw_content": "Response from LLM before application of text filters",
}
}
}
```

The following events are looped (after the last one, the sequence starts over from this first event if more is generated).
This event's results depend on the filters applied. Some operations, such as `emotion_roberta`, augment the result, for example by adding an emotion alongside the `content` property.
```json
{
"status": 200,
"message": "job type",
"response": {
"job_id": "job uuid generated when first created",
"finished": false,
"result": {
"content": "Response from LLM after filters",
"other augmented properties": "their value",
...
}
}
}
```

If audio is included, multiple of these can be produced (one per chunk of audio):
```json
{
"status": 200,
"message": "job type",
"response": {
"job_id": "job uuid generated when first created",
"finished": false,
"result": {
"audio_bytes": "base64 utf-8 encoded bytes",
"sr": 123,
"sw": 123,
"ch": 123
}
}
}
```

### `context_clear`

No job-specific events.

### `context_request_add`

Events contain details of the context added. Only one is generated.
```json
{
"status": 200,
"message": "job type",
"response": {
"job_id": "job uuid generated when first created",
"finished": false,
"result": {
"timestamp": 12345,
"content": "content as given in arguments",
"line": "[request]: as it appears in the script"
}
}
}
```

### `context_conversation_add_text`

Events contain details of the context added. Only one is generated.
```json
{
"status": 200,
"message": "job type",
"response": {
"job_id": "job uuid generated when first created",
"finished": false,
"result": {
"user": "name of user associated with line",
"timestamp": 12345,
"content": "content as given in arguments",
"line": "[line]: as it appears in the script"
}
}
}
```

### `context_conversation_add_audio`

Events contain details of the context added. Only one is generated.
```json
{
"status": 200,
"message": "job type",
"response": {
"job_id": "job uuid generated when first created",
"finished": false,
"result": {
"user": "name of user associated with line",
"timestamp": 12345,
"content": "content as given in arguments",
"line": "[line]: as it appears in the script"
}
}
}
```

### `context_custom_register`

No job-specific events.

### `context_custom_remove`

No job-specific events.

### `context_custom_add`

Events contain details of the context added. Only one is generated.
```json
{
"status": 200,
"message": "job type",
"response": {
"job_id": "job uuid generated when first created",
"finished": false,
"result": {
"timestamp": 12345,
"content": "content as given in arguments",
"line": "[line]: as it appears in the script"
}
}
}
```

### `operation_load`

Events contain details of the loaded operation. One is generated per operation listed.
```json
{
"status": 200,
"message": "job type",
"response": {
"job_id": "job uuid generated when first created",
"finished": false,
"result": {
"type": "operation type",
"id": "operation id"
}
}
}
```

### `operation_reload_from_config`

No job-specific events.

### `operation_unload`

Events contain details of the unloaded operation. One is generated per operation listed.
```json
{
"status": 200,
"message": "job type",
"response": {
"job_id": "job uuid generated when first created",
"finished": false,
"result": {
"type": "operation type",
"id": "operation id"
}
}
}
```

### `operation_use`

Events contain results from using an operation. Multiple of these can be generated if output is streamed. The resulting chunk differs between operations, but the usual behavior is generalized under Creating Custom Integrations - Operations.
```json
{
"status": 200,
"message": "job type",
"response": {
"job_id": "job uuid generated when first created",
"finished": false,
"result": { "output chunk property": "output chunk value" }
}
}
```

### `config_load`

No job-specific events.

### `config_update`

No job-specific events.

### `config_save`

No job-specific events.
## Creating Custom Integrations

Whether you want to use an unsupported service, directly implement a model into jaison-core, or just make and share an external application with jaison-core as its backend, this guide should help you navigate and work on the code like Limit does.
- Some Definitions
- Making Operations
- Adding Managed Processes
- Adding MCP Servers
- Making Applications
- Extending Configuration
- Extending API
### Some Definitions

Operation - A unit of compute that assists in creating or modifying a response.
Active operation - Operation that has started and can be used.
Inactive operation - Operation that has never been started or has closed and can't be used.
Process - A program that has to run in a separate process from jaison-core. When referred to in this context, it generally means jaison-core is responsible for starting and stopping this process (it is a child process of jaison-core, not another server you manually booted up on the side).
Application - A program that uses the REST API or websocket server of jaison-core.
Application layer - Main implementation of functionality for all REST API endpoints. utils/jaison.py is the file Limit refers to as the "application layer" whereas operations are seen as the "hardware layer".
Event - A message sent through a websocket from jaison-core to an application.
Job - A special request created through the REST API. These are tasks to be completed after all previously created tasks are complete. They run one at a time and wait in a queue to be processed. They outlive the original API request that made them, and they communicate their results and status back through websockets. Each job is associated with a single function in the application layer. Simply put, they are queued functions that will produce events.
### Making Operations

Everything you need to make a basic operation is in utils/operations.
To make a new operation, make a new file in the directory corresponding to your operation type. In this file, you will be implementing the base operation of that type. You can find that in the base.py in that type's directory.
There are 2 inherited attributes:

- `op_type`: (str) operation type specifier
- `op_id`: (str) the operation id you specified in `__init__`
There are 6 functions to note:

- `__init__(self)`: Must be implemented with no additional arguments. In here, you must also call `super().__init__(op_id)`, where `op_id` will be the id of this operation, unique among those of the same type (there are multiple kobold operations, but each is of a different type). You can initialize attributes here, but this runs only once and is synchronous.
- `__call__`: DO NOT IMPLEMENT
- `async start(self)`: This is where you'll actually set up your operation. Make any connections, asynchronous calls, etc. This is called every time the operation is loaded. Don't worry about closing before starting, as that's handled automatically. Remember to call `await super().start()` at the beginning.
- `async close(self)`: This is where you'll stop your operation. Close any connections and clean up. This is what's called before every start if the operation has already started. Remember to call `await super().close()` at the beginning.
- `async _parse_chunk(self, chunk_in)`: Extract information from the input dictionary `chunk_in`, validate it, and use it as input to `_generate`. A default is already implemented, but if you need to parse additional fields not parsed by default (such as emotion for an emotion-based TTS), reimplement it with the same spec as the base.
- `async _generate(self, **kwargs)`: Must be implemented. Instead of returning, use `yield`, even if you only yield once. Results from `_parse_chunk` are used as kwargs here. Perform the calculation and yield a dictionary containing at least the fields specified in base.py.
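For example, a hypothetical TTS operation might look like the following sketch (the module path, base class name, and yielded fields are assumptions; check the base.py of your operation type for the real spec):

```python
# utils/operations/tts/my_tts.py (hypothetical; base class name assumed)
from .base import BaseTTSOperation

class MyTTSOperation(BaseTTSOperation):
    def __init__(self):
        # op_id must be unique among operations of the same type
        super().__init__("my_tts")

    async def start(self):
        await super().start()
        # Make connections, load models, etc. Called on every load;
        # closing beforehand is handled automatically.

    async def close(self):
        await super().close()
        # Close connections and clean up whatever start() created.

    async def _generate(self, content: str = None, **kwargs):
        # Do the actual work here, then yield (never return) dicts with at
        # least the fields required by base.py; names below are illustrative.
        audio = b"\x00" * 32000  # placeholder audio bytes
        yield {"audio_bytes": audio}
```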
All operations are accessed from the OperationManager located in utils/operations/manager.py. Everything here is dynamic except for the function `loose_load_operation`, which is what you'll be modifying:

1. Find the function `loose_load_operation`
2. Find the case that matches your operation's type
3. Extend the if-else block:
   - The `op_id` you match should be the one you initialized before, and is also the id you use in configuration
   - Add your import statement here, not globally
   - Return an instance like the rest of them
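For example, the new branch might look like this (a sketch; match the style of the surrounding cases):

```python
# utils/operations/manager.py, inside loose_load_operation, in the case
# for your operation's type (names follow the sketch above)
elif op_id == "my_tts":
    from .tts.my_tts import MyTTSOperation  # import locally, not globally
    return MyTTSOperation()
```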
You can now use your custom operation.
### Adding Managed Processes

If you have an operation that depends on another running application, you can have jaison-core automatically start and stop that application depending on whether the operation is in use. This is done for KoboldCPP, and can be done for your application as well, as long as you can start and get a handle to that process in Python (see utils/processes/processes/koboldcpp.py for an example).

Code for managing processes can be found in utils/processes. Process-specific code is in utils/processes/processes. You will need to implement BaseProcess, found in utils/processes/base.py.
You only need to implement 2 functions; all else should not be modified. Check the base implementation to know which these are:

- `__init__`: Be sure to call `super().__init__(process_id)`, where `process_id` is a unique name chosen purely for logging purposes.
- `async reload(self)`: Starting logic. You will need to start the process and save it to the `process` attribute. You can also save the port, if applicable, for use in your operations.
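A minimal sketch, assuming the BaseProcess interface described above (the command and port are illustrative):

```python
# utils/processes/processes/my_process.py (hypothetical)
import asyncio
from utils.processes.base import BaseProcess

class MyProcess(BaseProcess):
    def __init__(self):
        super().__init__("my_process")  # unique name, purely for logging

    async def reload(self):
        # Start the external program and keep a handle to it in `process`
        self.process = await asyncio.create_subprocess_exec(
            "my_server", "--port", "8123"  # illustrative command
        )
        self.port = 8123  # save the port, if applicable, for operations to use
```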
All processes are accessed through the ProcessManager found in utils/processes/manager.py. We need to add your process here so it's exposed for use:

1. Open `utils/processes/manager.py`
2. Add an entry to the `ProcessType` enum for your process.
3. Create a new case in the function `load`:
   - Import your process in there
   - Add a new instance with the enum as the key
   - Asynchronously call `reload` on that instance
The process does not start until an operation demands it. Likewise, it does not stop until no more operations are using it. To set up this relationship, we need to know 2 functions from the ProcessManager:

- `link(link_id, process_type)`: Link an operation to that process. This lets the process know it's being used by that operation. `link_id` is an ID unique across all operations for that specific operation. `process_type` is the enum you created for your process.
- `unlink(link_id, process_type)`: Unlink an operation from that process. This lets the process know the operation no longer needs it (because it's closing or just doesn't need it). `link_id` and `process_type` are as above.

When all links are gone, a process will unload itself. Once an operation links up again, the process will start up again. For examples of how this is used, see any kobold operation; a rough sketch is also shown below.
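In an operation, the link and unlink calls typically belong in start and close, roughly like this (a sketch; how the manager instance is obtained and how link_id is constructed may differ in the actual code):

```python
# Inside an operation that depends on the managed process (names illustrative)
from utils.processes.manager import ProcessManager, ProcessType

class MyTTSOperation(BaseTTSOperation):
    async def start(self):
        await super().start()
        # link_id must be unique across ALL operations, so combine type and id
        ProcessManager().link(f"{self.op_type}_{self.op_id}", ProcessType.MY_PROCESS)

    async def close(self):
        await super().close()
        # Release the process; it unloads itself once no links remain
        ProcessManager().unlink(f"{self.op_type}_{self.op_id}", ProcessType.MY_PROCESS)
```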
There are additional helper functions you may find useful:

- `get_process(process_type)`: Get the instance of that process. Useful if you need direct access to its attributes, such as the port.
- `signal_reload(process_type)`: Have the process restart on the next clock cycle. Typically not needed for an operation; it's more for restarting a process with modified configuration.
- `signal_unload(process_type)`: Have the process forcibly unload on the next clock cycle. This ignores existing links and just shuts down the process. Typically not needed for an operation; it's more for jaison-core shutdown.
### Adding MCP Servers

This project has an MCP client built in. Tool calls are generated by a separately configured tool-calling LLM (the one with role mcp), given the current user and system prompt as context. This tool-calling occurs in the response pipeline just before the prompts for the personality LLM are generated. Tools are automatically described and their results appended to the script for any MCP server, and any well-documented MCP server will be compatible with this project.
To add an MCP server, add the MCP server directory to models/mcp. For example, if I have an MCP server in the file internet.py, I can put it in models/mcp/internet/internet.py. To configure the project to deploy and use that server, add a new entry under mcp in the yaml config. For example:

```yaml
mcp:
  - id: example_server
    command: python
    args: ["example_mcp_server.py"]
    cwd: "path/to/server/directory"
```

The id can be any arbitrary, unique id of your choice. The rest are self-explanatory. You may use any MCP server (it doesn't have to be Python, and if it is Python, it should work with the current Python version and dependencies).
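For illustration, a minimal Python MCP server built with the official MCP SDK's FastMCP helper could look like this (the tool itself is arbitrary):

```python
# models/mcp/example_server/example_mcp_server.py (illustrative)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example_server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```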
### Making Applications

Applications can vary in form and function. I [Limit] am not going to tell you how to make your application, but here are some pointers.
All interactions are started through the REST API. I've extensively documented it using the OpenAPI standard in api.yaml and under the REST API section.
The majority of interactions are job-based, so it will most likely be necessary to create a websocket session. It's recommended to create a long-lived websocket connection and iterate through all incoming events indefinitely. Events can be associated with a specific job via `job_id` and with the type of job via `message`. For more information on these events, from order to structure, see the Websocket Events section.
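A listener loop might be structured like this (a sketch using the third-party websockets library; the URL and transport are assumptions, so adjust them to how your jaison-core instance serves websockets):

```python
import asyncio
import json
import websockets  # pip install websockets

async def listen():
    # URL is illustrative; point it at your jaison-core websocket server
    async with websockets.connect("ws://localhost:5001/ws") as ws:
        async for raw in ws:
            event = json.loads(raw)
            response = event.get("response", {})
            # `message` identifies the job type, `job_id` the specific job
            print(event.get("message"), response.get("job_id"), response.get("result"))

asyncio.run(listen())
```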
### Extending Configuration

All configuration lives in utils/config.py. It is accessible throughout the code by importing this module and fetching the singleton via Config(). Extending the configuration is as simple as adding a new attribute. This attribute must have a type hint and a default value. You can then configure this value from your config files using the same name as the attribute.
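For example (a sketch; my_new_setting is a made-up attribute):

```python
# utils/config.py (sketch; the class already exists, other attributes elided)
class Config:
    my_new_setting: int = 42  # a type hint and a default value are both required

# Anywhere else in the code:
from utils.config import Config
print(Config().my_new_setting)  # set `my_new_setting` in a config file to override
```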
### Extending API

The API is implemented using Quart in utils/server/app_server.py. Every endpoint follows a very similar style and has one entry for functionality and another for handling CORS. Regardless of whether you're making a job-based or non-job-based API endpoint, you need to create both of these entries.
Example functionality entry:

```python
@app.route('/api/config', methods=['GET'])
async def get_current_config():
    pass
```

Example CORS entry:

```python
@app.route('/api/config', methods=['OPTIONS'])
async def preflight_config():
    return create_preflight('GET')
```

The CORS entry will always return a call to create_preflight(method), and that suffices.
As for functional entries, their implementation differs depending on whether they are job-based or not.
Example (non-job-based):

```python
@app.route('/api/config', methods=['GET'])
async def get_current_config():
    return create_response(200, f"Current config gotten", JAIson().get_current_config(), cors_header)
```

This is the typical structure of a non-job-based endpoint. This kind of endpoint does not queue a job; it is your traditional REST API endpoint.
`create_response` normalizes the response returned from the actual function. You can find the implementation in utils/server/common.py. In the snippet, besides the obvious changes to the function name, route, and possibly method, we also need to change the message and the function call used in `create_response`.

Messages here hold no importance besides potential logging in applications.

All functions are defined in the application layer; this is by convention, and it's up to you if you want to follow it. You may return any JSON-serializable data type, and it will appear in the `response` field of the body.
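Putting that together, a new non-job-based endpoint might look like this (the route and application-layer function are hypothetical):

```python
# Functionality entry (hypothetical endpoint)
@app.route('/api/status', methods=['GET'])
async def get_status():
    # JAIson().get_status() is made up for this example
    return create_response(200, "Status gotten", JAIson().get_status(), cors_header)

# Matching CORS entry
@app.route('/api/status', methods=['OPTIONS'])
async def preflight_status():
    return create_preflight('GET')
```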
Example (job-based):

```python
@app.route('/api/response', methods=['POST'])
async def response():
    return await _request_job(JobType.RESPONSE)
```

These need to be defined after the definition of _request_job. Job-based endpoints are a lot more complicated to set up, so bear with me.
Besides the obvious changes to the API endpoint and method, you need to change the JobType to the correct enum. If you're making a new endpoint, chances are you don't have an enum for your job yet. To create an enum, go to utils/jaison.py and add it to JobType. The string chosen here is what's used in `message` in events (used to identify which job type an event results from).
To associate this enum with a job's function, add a case for it under the function create_job. Copy the format of the other lines, replacing only the enum and the function called (DO NOT AWAIT THIS FUNCTION).
You will need to correctly define your job's function as well. Define a new async function on JAIson as follows:

```python
async def my_job_function(
    self,
    job_id: str,
    job_type: JobType,
    ...
):
    ...
```

There are several requirements here:
- The only args should be `self`, `job_id`, and `job_type`
- All arguments you expect to receive from the request body are listed as kwargs. You should not put `**kwargs` unless you intend to validate request bodies in this function.
- THIS MUST BE AN ASYNC FUNCTION
Websocket events follow a predictable order, so it's best you follow the order of emitted events to avoid breaking applications:
- Start with `await self._handle_broadcast_start(job_id, job_type, {kwargs})`
  - If one of your kwargs is expected to be large, replace it with a short form or boolean indicator so listeners can confirm the parameters of the job
- End with `await self._handle_broadcast_success(job_id, job_type)`
You don't need to handle error events as these are done automatically when the coroutine throws an exception.
Implement the rest of your function in between. To communicate status and results, use `await self._handle_broadcast_event(job_id, job_type, {whatever you want})`. Whatever you put in the dictionary is what's put in `result` in the event.
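Putting it all together, a job function might look like this (a sketch; the kwargs and broadcast payloads are made up):

```python
# Hypothetical job function on JAIson
async def my_job_function(
    self,
    job_id: str,
    job_type: JobType,
    text: str = None,  # kwargs mirror the expected request body
):
    # Broadcast the start along with the received arguments
    await self._handle_broadcast_start(job_id, job_type, {"text": text})

    # Do the actual work, broadcasting intermediate results as events
    await self._handle_broadcast_event(job_id, job_type, {"content": text.upper()})

    # Broadcast success; error events are emitted automatically on exceptions
    await self._handle_broadcast_success(job_id, job_type)
```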
Now your new job-based endpoint is all set up.
## Known Issues

jaison-core will not capture kill signals until all websocket connections are closed. Since jaison-core itself does not let go of these connections, the applications themselves must terminate their connections before jaison-core can shut down.

The lack of data validation, combined with insecure connections, makes this application vulnerable to all sorts of security attacks. Hosting it outside a private network is not recommended.