7 changes: 7 additions & 0 deletions .claude-plugin/marketplace.json
@@ -80,6 +80,13 @@
"source": "./skills/hugging-face-vision-trainer",
"skills": "./",
"description": "Train and fine-tune object detection models (RTDETRv2, YOLOS, DETR and others) and image classification models (timm and transformers models — MobileNetV3, MobileViT, ResNet, ViT/DINOv3) using Transformers Trainer API on Hugging Face Jobs infrastructure or locally. Includes COCO dataset format support, Albumentations augmentation, mAP/mAR metrics, trackio tracking, hardware selection, and Hub persistence."
},

{
"name": "ai-status-monitor",
"source": "./skills/ai-status-monitor",
"skills": "./",
"description": "Monitor real-time AI provider status, model availability, pricing, trending usage, benchmark rankings, and incidents from aistatus.cc."
}
]
}
1 change: 1 addition & 0 deletions README.md
@@ -89,6 +89,7 @@ This repository contains a few skills to get you started. You can also contribute
<!-- BEGIN_SKILLS_TABLE -->
| Name | Description | Documentation |
|------|-------------|---------------|
| `ai-status-monitor` | Monitor real-time AI provider status, model availability, pricing, trending usage, benchmark rankings, and incidents from aistatus.cc. | [SKILL.md](skills/ai-status-monitor/SKILL.md) |
| `gradio` | Build Gradio web UIs and demos in Python. Use when creating or editing Gradio apps, components, event listeners, layouts, or chatbots. | [SKILL.md](skills/huggingface-gradio/SKILL.md) |
| `hf-cli` | Execute Hugging Face Hub operations using the hf CLI. Download models/datasets, upload files, manage repos, and run cloud compute jobs. | [SKILL.md](skills/hf-cli/SKILL.md) |
| `hugging-face-dataset-viewer` | Explore, query, and extract data from any Hugging Face dataset using the Dataset Viewer REST API and npx tooling. Zero Python dependencies — covers split/config discovery, row pagination, text search, filtering, SQL via parquetlens, and dataset upload via CLI. | [SKILL.md](skills/hugging-face-dataset-viewer/SKILL.md) |
2 changes: 2 additions & 0 deletions agents/AGENTS.md
@@ -3,6 +3,7 @@
You have additional SKILLs documented in directories containing a "SKILL.md" file.

These skills are:
- ai-status-monitor -> "skills/ai-status-monitor/SKILL.md"
- gradio -> "skills/huggingface-gradio/SKILL.md"
- hf-cli -> "skills/hf-cli/SKILL.md"
- hugging-face-dataset-viewer -> "skills/hugging-face-dataset-viewer/SKILL.md"
@@ -20,6 +21,7 @@ IMPORTANT: You MUST read the SKILL.md file whenever the description of the skill

<available_skills>

ai-status-monitor: `Monitor real-time status for major AI providers, search model availability and pricing, and report trending models, benchmark rankings, and incidents from aistatus.cc. Use when users ask if a provider is down, whether a model is available, or what models are trending/high-performing.`
gradio: `Build Gradio web UIs and demos in Python. Use when creating or editing Gradio apps, components, event listeners, layouts, or chatbots.`
hf-cli: `"Hugging Face Hub CLI (`hf`) for downloading, uploading, and managing repositories, models, datasets, and Spaces on the Hugging Face Hub. Replaces now deprecated `huggingface-cli` command."`
hugging-face-dataset-viewer: `Use this skill for Hugging Face Dataset Viewer API workflows that fetch subset/split metadata, paginate rows, search text, apply filters, download parquet URLs, and read size or statistics.`
61 changes: 61 additions & 0 deletions skills/ai-status-monitor/SKILL.md
@@ -0,0 +1,61 @@
---
name: ai-status-monitor
description: Monitor real-time status for major AI providers, search model availability and pricing, and report trending models, benchmark rankings, and incidents from aistatus.cc. Use when users ask if a provider is down, whether a model is available, or what models are trending/high-performing.
---

# AI Status Monitor (aistatus.cc)

Use this skill to query the public API endpoints at `https://aistatus.cc`.

## Endpoint map

- `GET /api/all`
Returns a full snapshot:
- `status`: `{ providerStatus: [...], totalModels }`
- `trending`: `{ week, models: [...] }`
- `mmlu`: `[ ... ]` (top benchmark models)
- `incidents`: `[ ... ]` (recent status transitions)
- `lastUpdated`

- `GET /api/status`
Returns provider-level status:
- `providerStatus[]`: `{ slug, name, modelCount, status, statusDetail, ... }`
- `totalModels`
- `lastUpdated`

- `GET /api/model?q=<query>`
Returns model search results:
- `query`, `count`, `models[]`, `lastUpdated`
- model entries include `id`, `name`, `provider`, `context_length`, `pricing`, `modality`

- `GET /api/trending`
Returns OpenRouter weekly usage ranking:
- `week`
- `models[]`: `{ rank, id, name, provider, tokens, tokensFormatted, ... }`
- `lastUpdated`

- `GET /api/mmlu`
Returns benchmark leaderboard:
- `models[]`: `{ rank, name, avgScore, mmluPro, mathLvl5, gpqa, params, type }`
- `lastUpdated`

- `GET /api/incidents`
Returns recent provider status changes:
- `incidents[]`: `{ slug, name, from, to, detail, timestamp }`
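As a quick sanity check on the endpoint map above, a small helper can build the request URLs, including query-string encoding for model search. The `build_url` helper and its name are illustrative, not part of the aistatus.cc API:

```python
from urllib.parse import urlencode

BASE_URL = "https://aistatus.cc"

def build_url(endpoint: str, **params) -> str:
    """Build a full aistatus.cc API URL, encoding any query parameters."""
    url = f"{BASE_URL}/api/{endpoint}"
    if params:
        url += "?" + urlencode(params)
    return url

# Parameterless endpoints:
print(build_url("all"))     # https://aistatus.cc/api/all
print(build_url("status"))  # https://aistatus.cc/api/status
# Model search takes a q parameter, which must be URL-encoded:
print(build_url("model", q="gpt-4o mini"))  # https://aistatus.cc/api/model?q=gpt-4o+mini
```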

## Request strategy

1. Default to `GET /api/all` for broad or ambiguous requests.
2. Use `GET /api/status` for provider uptime/outage questions.
3. Use `GET /api/model?q=...` for model lookup, context length, or price checks.
4. Use `GET /api/trending` for popularity/usage ranking questions.
5. Use `GET /api/mmlu` for benchmark leaderboard questions.
6. Use `GET /api/incidents` for outage history or recent changes.
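The routing steps above can be sketched as a simple keyword dispatcher. The keyword lists are illustrative assumptions about how user questions might be phrased, not part of the skill definition:

```python
def pick_endpoint(question: str) -> str:
    """Map a user question to an aistatus.cc endpoint, mirroring the strategy above."""
    q = question.lower()
    if any(w in q for w in ("down", "outage", "uptime")):
        return "/api/status"       # provider uptime/outage questions
    if any(w in q for w in ("price", "pricing", "context length", "available")):
        return "/api/model"        # model lookup, context length, price checks
    if any(w in q for w in ("trending", "popular", "usage")):
        return "/api/trending"     # popularity/usage ranking
    if any(w in q for w in ("benchmark", "leaderboard", "mmlu")):
        return "/api/mmlu"         # benchmark leaderboard
    if any(w in q for w in ("incident", "history", "recent change")):
        return "/api/incidents"    # outage history, recent changes
    return "/api/all"              # default for broad or ambiguous requests
```

A real dispatcher would need fuzzier matching (note that naive substring checks would route "download" to `/api/status`), but the fallthrough order matches the numbered strategy.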

## Response rules

- Include the API timestamp field (`lastUpdated` or event `timestamp`) when available.
- For provider status, report `status`, `statusDetail` (if present), and `modelCount`.
- For model search, show top matches first and include provider status plus pricing.
- For rankings, preserve rank order and clearly label the metric (`tokens` or `avgScore`).
- If the query is vague, return a concise summary from `/api/all` and ask one follow-up question.
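A minimal formatter following these rules might look like the sketch below. The payload shape mirrors the `/api/status` fields documented above; the function name and exact output layout are assumptions:

```python
def summarize_status(payload: dict) -> str:
    """Render provider status per the response rules, ending with lastUpdated."""
    lines = []
    for p in payload["providerStatus"]:
        # Always report status and modelCount; add statusDetail only if present.
        line = f"{p['name']}: {p['status']} ({p['modelCount']} models)"
        if p.get("statusDetail"):
            line += f" - {p['statusDetail']}"
        lines.append(line)
    # Include the API timestamp, per the response rules.
    lines.append(f"As of: {payload['lastUpdated']}")
    return "\n".join(lines)
```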