Analyzer Service

The Analyzer Service is an AI evaluation system designed to transform natural language into consistent, interpretable numeric scores (0–100) using LLM-powered prompt templates.

At a high level:

  • You describe what you want to evaluate (e.g., “sentiment analyzer”, “toxicity detector”, “urgency score”).
  • The system auto-generates a high-quality evaluation prompt using an LLM (via Ollama).
  • It stores that prompt in Redis as a reusable evaluation function.
  • You then send text to /analyze/{endpoint} and the model returns a strict integer score between 0 and 100, where:
    • 0 = worst / negative / low / pessimistic
    • 100 = best / positive / high / optimistic

This creates a reusable, scalable way to turn any qualitative task into a numeric scoring API.
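The two-call flow above can be sketched in Python. This is a hedged illustration, not the service's own client: the host/port follow the Docker instructions later in this README, and nothing is actually sent here; the `Request` objects are only constructed.

```python
import json
import urllib.request

BASE = "http://localhost:8080"  # port taken from the Docker run commands below

def post_request(path: str, body: dict) -> urllib.request.Request:
    """Build (but do not send) a JSON POST request to the service."""
    return urllib.request.Request(
        BASE + path,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# 1) Create a scoring endpoint from a plain-language description.
gen = post_request("/generate", {"endpoint": "sentiment", "content": "sentiment analyzer"})

# 2) Score some text against it.
ana = post_request("/analyze/sentiment", {"content": "I love this!"})

print(gen.full_url, ana.full_url)
```

Sending either request (e.g. with `urllib.request.urlopen`) requires the containers from the Docker section to be running.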

Why This Service Exists

LLMs typically respond with paragraphs of prose, but sometimes you need:

  • A single number
  • Consistent scoring rules
  • A simple API for rating text
  • Reusable evaluation logic
  • A local, offline-capable system without OpenAI APIs

This service solves that by:

  1. Automatically generating evaluation prompts
  2. Persisting them as configurable scoring functions
  3. Using those prompts to evaluate user input consistently
  4. Always returning a 0–100 score—no text, no extra words
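Step 2 above, persisting prompts as reusable scoring functions, can be sketched with a dict-backed stand-in for Redis. The key naming here is an assumption for illustration; the service's actual Redis schema isn't documented in this README.

```python
class PromptStore:
    """Dict-backed stand-in for the Redis prompt storage."""

    def __init__(self):
        self._store = {}

    def save_prompt(self, endpoint: str, prompt: str) -> None:
        # One reusable evaluation prompt per endpoint.
        # The "prompt:{endpoint}" key layout is an assumption.
        self._store[f"prompt:{endpoint}"] = prompt

    def get_prompt(self, endpoint: str) -> str:
        return self._store[f"prompt:{endpoint}"]

store = PromptStore()
store.save_prompt("sentiment", "Analyze the following text ...")
print(store.get_prompt("sentiment"))
```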

You can build endpoints like:

  • /analyze/sentiment → 0–100
  • /analyze/toxicity → 0–100
  • /analyze/urgency → 0–100
  • /analyze/emotion → 0–100
  • /analyze/professionalism → 0–100

Each endpoint uses its own customizable prompt.
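Registering each of these analyzers is one POST to /generate. A hedged sketch that only builds the JSON request bodies (endpoint names from the list above, body shape matching the /generate example in this README):

```python
import json

def generate_payload(endpoint: str, description: str) -> str:
    """Build the JSON body for POST /generate (shape from the README example)."""
    return json.dumps({"endpoint": endpoint, "content": description})

# One payload per analyzer; the "<name> analyzer" descriptions are illustrative.
payloads = {
    name: generate_payload(name, f"{name} analyzer")
    for name in ["sentiment", "toxicity", "urgency", "emotion", "professionalism"]
}
print(payloads["sentiment"])
```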

How It Works

Step 1 — Generate

POST /generate:

{
  "endpoint": "sentiment",
  "content": "sentiment analyzer"
}

Ollama generates a task-specific evaluation prompt, which is stored in Redis:

{"prompt":"Analyze the following text and assign a sentiment score from 0 to 100, where 0 represents the worst possible sentiment (extremely negative, bad, or pessimistic) and 100 represents the best possible sentiment (extremely positive, good, or optimistic). Consider the overall tone, intensity of emotion, and subtle nuances of the text.  Pay close attention to any sarcasm, irony, or double meanings.  A score of 50 represents neutral sentiment.  Provide only a single integer between 0 and 100.\n"}
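Before analysis (Step 2 below), the service reads this record back and appends the user's text. A minimal sketch of that read-and-append step, assuming simple string concatenation (the exact joining format is an assumption; the stored prompt is abbreviated here):

```python
import json

# Abbreviated version of the stored Redis value shown above.
stored = '{"prompt":"Analyze the following text and assign a sentiment score from 0 to 100. ...\\n"}'

def build_llm_input(stored_json: str, content: str) -> str:
    # Parse the stored record and append the text to be scored.
    prompt = json.loads(stored_json)["prompt"]
    return prompt + content  # exact concatenation format is an assumption

llm_input = build_llm_input(stored, "I love this!")
print(llm_input)
```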

Step 2 — Analyze

POST /analyze/sentiment:

{ "content": "I love this!" }

The system:

  • Retrieves the stored prompt from Redis
  • Appends the input text
  • Sends it to the LLM
  • Extracts the integer score from the response
  • Returns:
{ "score": 94 }

Always 0–100. Always an integer.
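The extraction step can be sketched as a regex pull of the first integer in the model's reply, clamped to the 0–100 range. The service's actual parsing rules aren't documented, so this is illustrative:

```python
import re

def extract_score(llm_reply: str) -> int:
    """Pull the first integer out of an LLM reply and clamp it to 0-100."""
    match = re.search(r"-?\d+", llm_reply)
    if match is None:
        raise ValueError(f"no integer found in reply: {llm_reply!r}")
    return max(0, min(100, int(match.group())))

print(extract_score("94"))           # plain integer reply -> 94
print(extract_score("Score: 87\n"))  # tolerates stray words -> 87
```

Clamping guarantees the API contract (always an integer in 0–100) even if the model occasionally drifts out of range.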

Architecture Overview

For a sentiment analyzer, the flow is: UI (port 8000) → Analyzer Service (port 8080) → Redis (stored prompt, port 6379) and Ollama (LLM, port 11434) → integer score back to the caller.

Docker Run Commands

docker network create analyzer-net

docker run -d \
  --name redis \
  --network analyzer-net \
  -p 6379:6379 \
  -v redisdata:/data \
  redis:latest \
  redis-server \
    --requirepass "thepassword" \
    --appendonly yes \
    --appendfsync everysec


docker build -t analyzer-service:latest .

docker run -d \
  --name analyzer \
  --network analyzer-net \
  -p 11434:11434 \
  -p 8080:8080 \
  -p 8000:8000 \
  -v ollama-home:/home/appuser/.ollama \
  -e REDIS_ADDR=redis:6379 \
  -e REDIS_PASSWORD=thepassword \
  analyzer-service:latest

The container serves a friendly UI at http://localhost:8000 for interacting with the API.

Or serve the UI locally:

python3 -m http.server 8001

cURL Commands

curl -v "http://localhost:8080/health"

curl --location 'http://localhost:8080/generate' \
--header 'Content-Type: application/json' \
--data '{
  "endpoint": "sentiment",
  "content": "sentiment analyzer"
}'

curl -X POST http://localhost:8080/analyze/sentiment \
  -H "Content-Type: application/json" \
  -d '{"content":"I love you!"}'

License

MIT © mg52