Copy the install, test the workflow, then decide if it earns a permanent slot.
The signal is softer here. Treat it like a pattern source unless it solves a very specific gap.
Not hard to test, not trivial to unwind. Worth trying if it closes a sharp gap.
GitHub health unknown: no security policy. Zero open issues makes this testable, but not something to trust blindly.
AI Agent: Claude Code
Model: Claude
Build Time: Instant
Fastest way to find out if eval-layer belongs in your setup.
Copy the install command, run a real test, and back it out cleanly if it slows you down.
# Visit: https://github.com/erezweinstein5/eval-layer
Run this first. You will know quickly if the workflow earns a permanent slot.
# No automated removal: visit https://github.com/erezweinstein5/eval-layer
No messy cleanup loop. If it misses, remove it and keep moving.
Install Location
~/
└─ .claude/
   ├─ commands/
   ├─ agents/
   │  └─ eval-layer/   ← installs here
   └─ settings.json
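Since the skill ships no automated uninstaller, backing it out is a manual delete of the directory shown above. A minimal sketch, assuming that install path is accurate for your setup:

```shell
# Remove the eval-layer skill by hand; the guard avoids touching
# anything if it was never installed. Path is the one listed above.
if [ -d "$HOME/.claude/agents/eval-layer" ]; then
    rm -rf "$HOME/.claude/agents/eval-layer"
    echo "eval-layer removed"
fi
```

Run it once, restart your Claude Code session, and the skill is gone.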
A Claude Code skill that adds a rubric-based eval layer to any agent project. Framework-agnostic — generates rubric, test cases, judge prompt, and harness. Returns a weighted score plus a judge-leniency signal.
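The weighted score and judge-leniency signal can be sketched roughly like this. Note this is a hypothetical illustration of the idea, not code from the eval-layer repo: the criterion names, weights, and leniency heuristic are all assumptions.

```python
def weighted_score(rubric, judged):
    """Weighted average of per-criterion judge scores.

    rubric: {criterion: weight}; judged: {criterion: score in [0, 1]}.
    """
    total_weight = sum(rubric.values())
    return sum(rubric[c] * judged[c] for c in rubric) / total_weight

def leniency_signal(judge_scores, reference_scores):
    """Positive when the judge averages higher than a reference
    score set, hinting the judge prompt may be too lenient."""
    return (sum(judge_scores) / len(judge_scores)
            - sum(reference_scores) / len(reference_scores))

# Hypothetical rubric and judge output for one agent run.
rubric = {"correctness": 3, "grounding": 2, "style": 1}
judged = {"correctness": 0.9, "grounding": 0.8, "style": 0.5}
print(weighted_score(rubric, judged))  # 0.8
```

The leniency check only means anything relative to a calibrated reference set, which is presumably what the generated test cases provide.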