Build with AI Support

Yaoxing edited this page Dec 4, 2025 · 4 revisions

The log analysis can use AI to analyze warning/error/fatal logs. Logs are grouped before analysis, so each type of log (recognized by the id in the log) is analyzed only once.
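The grouping step can be sketched as follows. This is a minimal illustration only; the log structure (`"id"`, `"message"`) and the helper name are assumptions, not the tool's actual code:

```python
# Minimal sketch of grouping logs by id so each type is analyzed once.
# The dict fields used here are assumptions for illustration.

def group_logs(logs):
    """Keep only the first occurrence of each log id."""
    seen = set()
    unique = []
    for log in logs:
        if log["id"] not in seen:
            seen.add(log["id"])
            unique.append(log)
    return unique

logs = [
    {"id": "W001", "message": "disk almost full"},
    {"id": "E002", "message": "connection refused"},
    {"id": "W001", "message": "disk almost full"},
]
print([log["id"] for log in group_logs(logs)])  # → ['W001', 'E002']
```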

There are two options you can utilize:

  • Use the OpenAI API. Requires an OpenAI API key.
  • Use a local AI model. Requires configuring some AI options.

To choose which option to use, set ai_support in config.json to one of the following:

  • local: Local AI model.
  • gpt: OpenAI.
  • none: (Default) Disable AI.
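For example, a config.json that enables the local model could contain (only the ai_support key is documented on this page; any other keys in your config stay as they are):

```json
{
  "ai_support": "local"
}
```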

You also need to complete the following setup before you can use this feature.

Build with AI Modules

This only applies when you use the binaries built by make. If you install from PyPI or run the source code, you can skip this step.

Because the AI model takes significantly more disk space and makes the executable much bigger, AI support must be enabled explicitly when building the executable:

make build-ai

The executable will be named x-ray-ai in dist.

Use OpenAI API

Using the OpenAI API requires an OpenAI API subscription. You need to export the API key before running the tool; whenever there are W/E/F logs, they will be sent to OpenAI for analysis.

export OPENAI_API_KEY=<your api key>

It's also possible to customize which GPT model to use, but for now it is defined as the constant GPT_MODEL in x_ray.ai.
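Under the hood, an OpenAI-backed analysis boils down to sending the log text in a chat-completion request. The sketch below only builds such a request payload; the model name, prompt wording, and helper are assumptions for illustration, not the tool's actual code:

```python
# Illustrative sketch: build a chat-completion request payload for one log line.
# GPT_MODEL mirrors the constant in x_ray.ai; this particular value is an assumption.
GPT_MODEL = "gpt-4o-mini"

def build_request(log_line):
    """Return the JSON-serializable payload an OpenAI chat call would take."""
    return {
        "model": GPT_MODEL,
        "messages": [
            {"role": "system", "content": "You analyze warning/error/fatal logs."},
            {"role": "user", "content": f"Explain this log entry: {log_line}"},
        ],
    }

payload = build_request("E1234: connection refused")
print(payload["model"])  # → gpt-4o-mini
```

The actual call then passes this payload to the OpenAI client, which reads OPENAI_API_KEY from the environment.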

Use Local AI Model

Experimental! For faster analysis, you can also use local AI models.

There are options you can customize, also defined as constants in x_ray.ai:

  • MODEL_NAME: Hugging Face model name.
  • MAX_NEW_TOKENS: Maximum number of tokens in the output.

The model will be downloaded the first time you run x-ray.
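The local path follows the usual Hugging Face flow: MODEL_NAME is downloaded on first run, then at most MAX_NEW_TOKENS tokens are generated per log. The sketch below imitates that flow with a stubbed generator so it stays self-contained; the constant values and the stub are assumptions, not the tool's actual code:

```python
# Stubbed sketch of the local-AI flow. The constants mirror MODEL_NAME and
# MAX_NEW_TOKENS in x_ray.ai; a real run would load a Hugging Face
# text-generation pipeline for MODEL_NAME instead of this stub.
MODEL_NAME = "distilgpt2"   # assumption: any Hugging Face causal-LM name
MAX_NEW_TOKENS = 32         # cap on generated output length

def generate_stub(prompt, max_new_tokens):
    """Stand-in for a text-generation pipeline: returns at most
    max_new_tokens whitespace-separated tokens."""
    tokens = ("analysis of: " + prompt).split()
    return " ".join(tokens[:max_new_tokens])

print(generate_stub("E1234: connection refused", MAX_NEW_TOKENS))
```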
