
Add ai-evaluation by Future AGI #122

Open
SuhaniNagpal7 wants to merge 1 commit into promptslab:main from SuhaniNagpal7:add-traceai-ai-evaluation

Conversation

@SuhaniNagpal7

ai-evaluation is an open-source LLM evaluation framework with 50+ metrics, LLM-as-Judge augmentation, guardrail scanners (jailbreak, PII, injection), streaming assessment, and AutoEval pipelines with CI/CD support. Added to Tools and Code > LLM Evaluation Tools.
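To give a feel for what a guardrail scanner does, here is a minimal, self-contained sketch of PII detection via regex patterns. This is purely illustrative and is not ai-evaluation's actual API; the `scan_pii` function and `PII_PATTERNS` names are hypothetical, and a real scanner would use far more robust detection than these simple patterns.

```python
import re

# Hypothetical PII patterns for illustration only; a production guardrail
# scanner (like the ones ai-evaluation ships) covers many more categories.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_pii(text: str) -> dict:
    """Return PII-like spans found in `text`, keyed by category.

    Categories with no matches are omitted, so an empty dict means
    the text passed the (toy) guardrail.
    """
    hits = {}
    for name, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[name] = matches
    return hits

if __name__ == "__main__":
    print(scan_pii("Contact me at jane.doe@example.com or 555-123-4567."))
```

The same shape generalizes to jailbreak and injection scanners: each takes raw text in and returns structured findings, which a pipeline can then use to block, redact, or flag the LLM input/output.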

