Stop your AI agent from making the same mistakes twice.
Your AI agent forgets everything between sessions. SLOPE gives it a structured routine so every sprint builds on the last one: it scores what happened, tracks trends over time, and blocks known mistakes before they recur. Works with Claude Code, Cursor, Windsurf, and 5 more.
AI Agents Are Powerful. But Chaotic.
Without structure, session 50 looks exactly like session 1. SLOPE is designed around the four constraints that make AI agents unreliable at scale.
Context loss
Agents crash, compact, and restart from zero. Everything they learned last session is gone.
Without persistent state, every session starts blind — no memory of what broke last time, what patterns worked, or what's already been tried.
No consistency
Every session is ad-hoc. Quality depends on luck, not process. The agent might check tests, might not.
There's no routine. One session the agent writes tests first, the next it skips them entirely. You can't build a reliable workflow on random behavior.
No improvement
Same mistakes, same blind spots, no trends. No data to tell you if things are getting better or worse.
Without structured output, there's nothing to measure. You can't identify patterns, track progress, or course-correct — sprint 50 looks exactly like sprint 1.
No coordination
Multiple agents, multiple repos, zero visibility. No shared learnings. The same bug gets hit independently in three places.
Without cross-agent coordination and org-level metrics, teams can't tell if their AI workflows are converging or diverging.
How It Works
Your agent follows the same routine. Every sprint.
SLOPE adds structure through natural language prompts — no plugins, no integrations. Tell your agent what to do, and it produces structured data every time.
── Pre-Round Briefing ──────────────
── Hole 29 ────────────────────
"OAuth & API Refactor" │ Par 4 · Slope 1 (cross-package)
Handicap: 1.2 ↓ trending down
Bunker index: 2 known in auth module
Backlog: 4 tickets queued
Ready to start?
I've got a new sprint to work on, four tickets around OAuth and API refactoring. What should I know before I dive in?
── Pre-Shot Routine ──────────────
── Shot S29-1 ──────────────────
"Add OAuth provider" │ Short Iron
Ticket S29-1 claimed ✓
Club selected: Short Iron (standard single-package)
Known hazards in area: Bunker ×1 (nvm sourcing)
Target: OAuth callback endpoint working with tests
Provisional:
"If blocked after 2 commits, fall back to session-based auth"
Okay, I'd like to implement the Google and GitHub OAuth providers. How should I go about that?
── Pre-Shot Routine ──────────────
── Shot S30-2 ──────────────────
"Fix race condition in checkout" │ Short Iron
Ticket S30-2 claimed ✓
Club selected: Short Iron (targeted fix)
Known hazards in area: Water ×1 (async timing)
Related: S27-3 had similar miss pattern
Provisional:
"If root cause isn't clear after 30 min, add a lock and revisit"
Users are seeing duplicate orders when they double-click the checkout button. Can you dig into that and fix it?
── Workflow: sprint-standard ──────────────
Hole S31 │ "Auth & Permissions" │ Par 4
✓ Phase: pre_hole — briefing complete
✓ Phase: plan_review — plan approved (tier 1)
▸ Phase: per_ticket — ticket 2/4
○ Phase: post_hole
Completion: docs/plans/S31.md exists
Next step: implement Shot S31-2
Start sprint S31 using the standard workflow — plan it out first, then implement each ticket.
── Tournament Roadmap ──────────────
"V2 Launch" │ 4 holes · 14 shots
Hole 32: Auth & Permissions │ Par 4
Hole 33: Payment Integration │ Par 4 · depends on 32
Hole 34: Notifications │ Par 3 · parallel track
Hole 35: Launch Prep │ Par 3 · depends on 33, 34
Critical path: 32 → 33 → 35
Parallel track: 34 (can start after 32)
I need to plan out the V2 launch. We've got auth, payments, and notifications to build, and then a launch prep sprint. Can you map that out?
── Org Status ──────────────────────
3 repos tracked │ Org handicap: 0.3
api │ 24 sprints · handicap 0.2 · latest: par
web │ 18 sprints · handicap 0.4 · latest: bogey
mobile │ 12 sprints · handicap 0.3 · latest: birdie
Cross-repo issues: 2 shared patterns
#1 Token format assumptions (api, web)
#2 Missing index on polymorphic queries (api, mobile)
Show me the org status — how are all our repos doing?
── Post-Shot Scoring ──────────────
── Shot S29-1 Result ─────────────
Result: Fairway → Green ✅
Hazards hit: 0
Miss direction: None
Commits: 3 (within budget)
Status: On track — 1 of 4 tickets complete
Alright, the OAuth provider is working and tests are passing. How'd that go?
── Post-Hole Scorecard ──────────────
── Hole 29 ────────────────────
"OAuth & API Refactor" │ Par 4
| # | Shot | Club | Result |
|---|---|---|---|
| S29-1 | OAuth provider | Short Iron | ✅ Green |
| S29-2 | Token refresh | Wedge | ⭐ Hole! |
| S29-3 | API rate limiter | Long Iron | ✅ Fairway |
| S29-4 | Integration tests | Putter | ⚠ Fairway |
Score: 4 (Par) │ Handicap: 1.1 ↓
FW: 100% · GIR: 75% · Putts: 0.25
Hole complete. What next?
Okay I think that's everything done. How'd the whole sprint go, and what should I focus on next?
One Framework, Six Languages
Same engine. Different vocabulary.
The scoring engine doesn't change — only the language does. Pick the metaphor your team already understands.
| Concept | Metaphor Term |
|---|---|
| Sprint | Hole |
| Ticket | Shot |
| Scorecard | Scorecard |
| Performance Card | Handicap Card |
| Briefing | Pre-round Briefing |
| Perfect Score | Hole-in-one |
| On Target | Par |
| Review | 19th Hole |
| High-risk approach | Driver |
| Precision approach | Wedge |
| Known gotcha | Bunker |
| Breaking change | Water |
| Scope creep | Out of Bounds |
It Works
Consistent routines compound.
Measured across 69 sprints on the SLOPE monorepo. Estimation accuracy compares par against the actual score; delivery accuracy compares tickets completed against tickets planned.
Estimation Accuracy
How often sprint scoping is on target
Delivery Accuracy
How often tickets land as planned
Performance Index
Overall rating (lower is better)
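Because scorecards are plain JSON files in your repo, metrics like these can be recomputed with a few lines of standard tooling. A minimal sketch, assuming a one-file-per-hole layout and `score`/`par` fields (the path and field names here are illustrative assumptions, not SLOPE's actual schema):

```python
import json
from pathlib import Path

def estimation_accuracy(scorecards):
    """Fraction of holes where the actual score landed on par."""
    on_target = sum(1 for card in scorecards if card["score"] == card["par"])
    return on_target / len(scorecards)

def load_scorecards(root="docs/scorecards"):
    # Assumed layout: one JSON file per hole, e.g. docs/scorecards/S29.json
    return [json.loads(p.read_text()) for p in sorted(Path(root).glob("*.json"))]

# Example with in-memory cards:
cards = [
    {"hole": 29, "par": 4, "score": 4},  # par: scoped accurately
    {"hole": 30, "par": 4, "score": 5},  # bogey: under-scoped
]
print(estimation_accuracy(cards))  # 0.5
```

The point is the zero lock-in: nothing about the metrics requires SLOPE itself to read them back.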
Sprint 12
The same CSS z-index bug appeared for the third time. SLOPE flagged it as a recurring hazard. It never appeared again.
Retrospectives with data instead of feelings.
Every metric on this page comes from the reference implementation — 69 sprints of real work, scored with the same framework you install.
View the scorecards →
Quick Start
Three commands. Your agent remembers everything.
Install SLOPE, initialize your project, and connect your agent. Zero lock-in — SLOPE writes standard JSON scorecards to your repo. Uninstall anytime, your data stays.
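As one possible shape for that data (a hypothetical sketch; SLOPE's actual field names may differ, so check the generated files in your repo), a per-hole scorecard could look like:

```json
{
  "hole": 29,
  "title": "OAuth & API Refactor",
  "par": 4,
  "score": 4,
  "handicap": 1.1,
  "shots": [
    {"id": "S29-1", "name": "OAuth provider", "club": "short_iron", "result": "green"},
    {"id": "S29-2", "name": "Token refresh", "club": "wedge", "result": "hole"},
    {"id": "S29-3", "name": "API rate limiter", "club": "long_iron", "result": "fairway"},
    {"id": "S29-4", "name": "Integration tests", "club": "putter", "result": "fairway"}
  ],
  "stats": {"fw": 1.0, "gir": 0.75, "putts": 0.25}
}
```

Plain JSON means your history survives an uninstall and stays diffable in code review.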
1. Initialize: `npm install -g @slope-dev/slope`, then `slope init`
   Set up SLOPE in your project. Pick a metaphor. Configure guards.
2. Connect your agent: `slope hook add claude-code`
   Install guard hooks so SLOPE can inject context, warnings, and blocks in real time.
3. Start your first sprint: `slope briefing`
   Your first scorecard in ~15 minutes. Recurring patterns flagged by sprint 3. The workflow handles the rest.
Alternative: paste this prompt into your agent (no CLI needed)
If you prefer natural language, paste this into your AI agent's chat:
Works with any agent that has SLOPE's MCP tools or CLI access.
Pro Tips
Get more from every hole.
Workflow habits that separate birdies from bogeys.
Pre-round briefing before code
Get your hazard index and known gotchas before writing a single line: `slope briefing`
Declare your approach
Pick your club so the scorecard captures complexity, not just outcome: `slope claim`
Commit early, push often
The last push is your recovery point. Everything since is at risk.
Check miss patterns
A bogey from going Long is different from going Left. Directional data matters: `slope card`
Quick Reference
Scores, clubs, and hazards at a glance
Make your next sprint the last one your agent flies blind.
Every sprint without structured feedback is a sprint where the same mistakes repeat. The compounding starts when you do.
Install and run
`npm install -g @slope-dev/slope`
First scorecard in ~15 minutes.
Not ready to install?
Walk through a full sprint lifecycle in 2 minutes — no install needed.
Try the guide
Read the framework
Scoring engine, workflow system, guards, and the complete methodology.
Full reference
@slope-dev/slope
SLOPE is an open-source sprint scoring framework for AI agent teams.
Built with Astro, Tailwind, and GSAP. Live stats from the reference implementation.