For developers shipping with AI agents

Stop your AI agent from making the same mistakes twice.

Your AI agent forgets everything between sessions. SLOPE gives it a structured routine, so every sprint builds on the last one. It scores what happened, tracks trends over time, and blocks known mistakes before they recur. Works with Claude Code, Cursor, Windsurf, and 5 more.

Battle-tested across 69 sprints building SLOPE itself — every scorecard is public.
See how it works

AI Agents Are Powerful. But Chaotic.

Without structure, session 50 looks exactly like session 1. SLOPE is designed around the three constraints that make AI agents unreliable at scale.

Context loss

Agents crash, compact, and restart from zero. Everything they learned last session is gone.

Without persistent state, every session starts blind — no memory of what broke last time, what patterns worked, or what's already been tried.

No consistency

Every session is ad-hoc. Quality depends on luck, not process. The agent might check tests, might not.

There's no routine. One session the agent writes tests first, the next it skips them entirely. You can't build a reliable workflow on random behavior.

No improvement

Same mistakes, same blind spots, no trends. No data to tell you if things are getting better or worse.

Without structured output, there's nothing to measure. You can't identify patterns, track progress, or course-correct — sprint 50 looks exactly like sprint 1.

No coordination

Multiple agents, multiple repos, zero visibility. No shared learnings. The same bug gets hit independently in three places.

Without cross-agent coordination and org-level metrics, teams can't tell if their AI workflows are converging or diverging.

How It Works

Your agent follows the same routine. Every sprint.

SLOPE adds structure through natural language prompts — no plugins, no integrations. Tell your agent what to do, and it produces structured data every time.

claude — slope-project

"I've got a new sprint to work on, four tickets around OAuth and API refactoring. What should I know before I dive in?"

── Pre-Round Briefing ──────────────

── Hole 29 ────────────────────

"OAuth & API Refactor" │ Par 4 · Slope 1 (cross-package)

Handicap: 1.2 ↓ trending down
Bunker index: 2 known in auth module
Backlog: 4 tickets queued

Ready to start?

Claim first ticket
Review backlog
Adjust par

One Framework, Six Languages

Same engine. Different vocabulary.

The scoring engine doesn't change — only the language does. Pick the metaphor your team already understands.

Scorecard — Hole 28
Par 4 · Score: 4 (Par)

1. Setup & config — Wedge · Green
2. Core feature implementation — Short Iron · Fairway
3. Integration + edge cases — Long Iron · Bunker
4. Tests & documentation — Putter · In the Hole

Fairways: 4/4 · GIR: 3/4 · Putts: 1 · Handicap: 1.2
Concept → Metaphor Term

Sprint → Hole
Ticket → Shot
Scorecard → Scorecard
Performance Card → Handicap Card
Briefing → Pre-round Briefing
Perfect Score → Hole-in-one
On Target → Par
Review → 19th Hole
High-risk approach → Driver
Precision approach → Wedge
Known gotcha → Bunker
Breaking change → Water
Scope creep → Out of Bounds

It Works

Consistent routines compound.

Metrics from 69 sprints on the SLOPE monorepo. Estimation accuracy = par vs. actual score. Delivery accuracy = tickets completed vs. planned.

94%

Estimation Accuracy

How often sprint scoping is on target

75%

Delivery Accuracy

How often tickets land as planned

0.2

Performance Index

Overall rating (lower is better)

Sprint 12

The same CSS z-index bug appeared for the third time. SLOPE flagged it as a recurring hazard. It never appeared again.

Retrospectives with data instead of feelings.

Every metric on this page comes from the reference implementation — 69 sprints of real work, scored with the same framework you install.

View the scorecards →

Quick Start

Three commands. Your agent remembers everything.

Install SLOPE, initialize your project, and connect your agent. Zero lock-in — SLOPE writes standard JSON scorecards to your repo. Uninstall anytime, your data stays.
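To sketch what "standard JSON scorecards" means in practice, a scorecard file on disk might look something like the following. The field names here are illustrative only, reconstructed from the sample scorecard above — check the files SLOPE generates in your repo for the actual schema:

```json
{
  "hole": 28,
  "title": "Core feature sprint",
  "par": 4,
  "score": 4,
  "result": "par",
  "shots": [
    { "ticket": "Setup & config", "club": "wedge", "lie": "green" },
    { "ticket": "Core feature implementation", "club": "short-iron", "lie": "fairway" },
    { "ticket": "Integration + edge cases", "club": "long-iron", "lie": "bunker" },
    { "ticket": "Tests & documentation", "club": "putter", "lie": "in-the-hole" }
  ],
  "handicap": 1.2
}
```

Because it's plain JSON committed to your repo, the data survives an uninstall and can be queried with any tooling you already use.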

$ npm install -g @slope-dev/slope

1. slope init

Initialize

Set up SLOPE in your project. Pick a metaphor. Configure guards.

2. slope hook add claude-code

Connect your agent

Install guard hooks so SLOPE can inject context, warnings, and blocks in real time.

3. slope briefing

Start your first sprint

Your first scorecard in ~15 minutes. Recurring patterns flagged by sprint 3. The workflow handles the rest.

Alternative: paste this prompt into your agent (no CLI needed)

If you prefer natural language, paste this into your AI agent's chat:

"Initialize SLOPE in this project. Set up the config, enable guard hooks for [your platform], and give me a pre-sprint briefing."

Works with any agent that has SLOPE's MCP tools or CLI access.

Pro Tips

Get more from every hole.

Workflow habits that separate birdies from bogeys.

Pre-round briefing before code

Get your hazard index and known gotchas before writing a single line.

slope briefing

Declare your approach

Pick your club so the scorecard captures complexity, not just outcome.

slope claim

Commit early, push often

The last push is your recovery point. Everything since is at risk.

Check miss patterns

A bogey from going Long is different from going Left. Directional data matters.

slope card

Quick Reference

Scores, clubs, and hazards at a glance

Full cheat sheet →

Scoring

Eagle: −2
Birdie: −1
Par: 0
Bogey: +1
Double: +2

Clubs

Driver: High risk
Long Iron: Multi-pkg
Short Iron: Standard
Wedge: Small fix
Putter: Trivial

Hazards

Bunker: Known gotcha
Water: Breaking change
OB: Scope creep
Rough: Tech debt
Trees: Blocker

Make your next sprint the last one your agent flies blind.

Every sprint without structured feedback is a sprint where the same mistakes repeat. The compounding starts the moment you begin.

Install and run

$ npm install -g @slope-dev/slope

First scorecard in ~15 minutes

Not ready to install?

Walk through a full sprint lifecycle in 2 minutes — no install needed.

Try the guide

Read the framework

Scoring engine, workflow system, guards, and the complete methodology.

Full reference

@slope-dev/slope

SLOPE is an open-source sprint scoring framework for AI agent teams.

Built with Astro, Tailwind, and GSAP. Live stats from the reference implementation.