Feedback on your creating-skills skill #28

@RichardHightower


I was curious how you approached the skill creation workflow—there's a lot of implicit knowledge that's hard to document, but your rubric breaks it down in a way that actually makes sense for builders.

The TL;DR

You're at 84/100, solid B-grade territory. This is graded against Anthropic's skill architecture standards—specifically how well you structure information layering, discoverability, and practical utility. Your strongest area is actually Utility (18/20)—the skill solves real problems with good feedback loops. Weakest is Spec Compliance (12/15), mainly because your description only includes 1-2 trigger phrases when it could be more specific.
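For concreteness, a broader description might look something like the sketch below. It assumes the standard name/description frontmatter fields; the skill name and any phrases beyond your current four triggers are placeholders, not requirements. The point is simply to cover the ways a builder would actually phrase the request.

```
---
name: creating-skills
description: >
  Guide for authoring and maintaining agent skills. Use when: create skill,
  update skill, new skill, SKILL.md format, skill frontmatter, bundle a
  script or reference with a skill, verify.py, skill not triggering.
---
```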

What's Working Well

  • Reference architecture is chef's kiss—you're using the references/ directory exactly right (design-patterns.md, skill-activation.md, two-stage-review.md) to keep SKILL.md focused. That's the PDA sweet spot most skills miss (layout sketched just below this list).
  • Discoverability is tight—"Use when: create skill, update skill, SKILL.md format, verify.py" in your description means agents will actually activate this at the right time. Trigger terms are specific, not vague.
  • Verification workflow actually loops—Step 6 includes iteration (run→check→fix), and you reference verify.py for diagnostics. That's a feedback loop that actually works instead of just "here's a checklist."
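For anyone skimming this issue, the layering being praised looks roughly like the tree below. Only SKILL.md, verify.py, and the three reference files are taken from the skill itself; the scripts/ placement of verify.py is an assumption.

```
creating-skills/
├── SKILL.md                    # lean workflow + pointers, loaded on activation
├── scripts/
│   └── verify.py               # diagnostics used in the Step 6 loop (placement assumed)
└── references/
    ├── design-patterns.md      # pulled in only when the agent needs depth
    ├── skill-activation.md
    └── two-stage-review.md
```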

The Big One: Missing Complete End-to-End Example

This is what's costing you the most utility points (could gain ~3 points here). Right now you have scattered examples—frontmatter format at lines 184-198, skill anatomy at lines 51-62—but no complete working skill example showing all pieces together.

A builder reading this needs to see: "Here's a real, minimal skill with SKILL.md, a script, a reference file, and frontmatter that all work together." Right now they get fragments and have to mentally assemble it.

Fix: Add a "## Complete Example" section showing a minimal but real skill (maybe pdf-rotator/ or text-formatter/) with actual content in SKILL.md, scripts/, and references/. Make it copy-paste-able as a starting point.
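To make that concrete, here is a rough sketch of what such a section could contain, using the text-formatter idea from above. Every file name and line of content here is hypothetical; it's a shape to fill in, not an excerpt from the skill.

```
text-formatter/
├── SKILL.md
├── scripts/
│   └── format.py             # hypothetical helper script
└── references/
    └── style-rules.md        # hypothetical deep-dive reference

# --- SKILL.md ---
---
name: text-formatter
description: >
  Normalize whitespace and heading levels in markdown files. Use when:
  format markdown, clean up markdown, fix heading levels.
---

# Text Formatter

1. Run `python scripts/format.py <file>` to rewrite the file in place.
2. For edge cases (nested lists, code fences), read references/style-rules.md.
3. Re-run the script and spot-check the diff before finishing.
```

The value is that a builder sees the frontmatter, the script call, and the reference hand-off in one place instead of assembling them from fragments.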

Other Things Worth Fixing

  1. No table of contents (lines 1-10)—253 lines is too long without one. Add a TOC after the intro. Gains ~2 points.

  2. Redundant explanations bloat the file—Lines 68-69 re-explain frontmatter already defined at line 54. Lines 99-108 spend way too many words on "what not to include" (README.md, etc.). Tighten these sections and save ~100 tokens. Gains ~2 points.

  3. Zero troubleshooting guidance—Your verification section (lines 243-248) only covers verify.py failures. What if a skill doesn't trigger when expected? What if bundled scripts fail at runtime? Add a "## Troubleshooting" section with real failure modes (a sketch follows this list). Gains ~2 points.

  4. Second-person voice slips—Lines 31, 200, 249 use "you" ("you should...") instead of imperative. Change to "Use..." or "Ensure..." for consistency. Gains ~1 point.
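On point 3, a minimal troubleshooting section could be as small as the sketch below; the failure modes are generic guesses at common cases, not issues observed in this skill.

```
## Troubleshooting

| Symptom                         | Likely cause                                | First step                                    |
|---------------------------------|---------------------------------------------|-----------------------------------------------|
| Skill never triggers            | Description lacks the phrases builders type | Add concrete trigger phrases to frontmatter   |
| verify.py reports failures      | Frontmatter field or file layout drifted    | Fix the reported field, then re-run           |
| Bundled script fails at runtime | Missing dependency or wrong relative path   | Run the script standalone from the skill dir  |
```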

Quick Wins

  • Add TOC (~2 points) — Quick structural fix
  • Complete working example (~3 points) — Biggest bang for buck, addresses utility gap
  • Troubleshooting section (~2 points) — Real value for builders hitting issues
  • Trim redundancy (~2 points) — Tighten explanations, respect token budget
  • Fix voice consistency (~1 point) — Easy cleanup

These five changes would push you to 93-94/100. The example is the heavyweight—do that one first.


Check out your skill here: SkillzWave.ai | SpillWave. We have an agentic skill installer that installs skills on 14+ coding agent platforms. See this guide on how to improve your agentic skills.
