Full Reference
The SLOPE Framework
A comprehensive framework that maps sprint workflows to golf terminology. It goes beyond naming — the golf metaphor introduces structural improvements: directional miss tracking, handicap trending, club selection, difficulty rating, conditions tracking, and a multi-level routine hierarchy.
Installation & Usage
Get started with CLI commands or natural language prompts. Same flow, your choice.
```
npm install -g @slope-dev/slope
```

- Initialize (`slope init`): Set up SLOPE in your project. Pick a metaphor. Configure guards.
- Get briefed (`slope briefing`): Get a pre-sprint briefing: risk index, performance snapshot, known issues.
- Check your card (`slope card`): Check your performance card — rolling stats across multiple windows.
- Review the sprint (`slope review`): Generate a sprint review with ticket-by-ticket analysis and improvement recommendations.
Agents & Environments
SLOPE is a framework your AI agent follows — not a plugin. It integrates through three layers, each adding more automation. Any agent that can read files and run commands can use SLOPE.
Integration Layers
| Layer | What it does | Environments |
|---|---|---|
| CLI Tools | Agent runs slope commands (briefing, review, card, validate) | All |
| Rules Files | Agent reads framework instructions at session start — routines, checklists, commit discipline | Claude Code, Cursor, OpenCode |
| Hooks & Guards | Auto-fire on tool-use events, real-time warnings before mistakes land | Claude Code |
Environment Setup
Claude Code
- CLAUDE.md in project root — agent reads it at session start
- .claude/rules/ for checklists, commit discipline, sprint routines
- .mcp.json for MCP tool access (slope search, slope execute)
- Hooks: guards fire on Bash, Edit, Write tool events

Cursor
- .cursorrules file — system instructions (same content as CLAUDE.md)
- Agent runs slope CLI commands in terminal
- No hooks — agent must self-enforce routines

OpenCode
- AGENTS.md or project-level config
- Agent runs slope CLI commands
- No hooks — agent must self-enforce routines
Single Agent vs. Multi-Agent
- Single agent (default mode): One agent, one sprint. Scorecards track that agent's performance over time — handicap, miss patterns, hazard frequency.
- Multi-agent: Assign roles (backend, frontend, architect, devops, generalist), claim tickets, per-agent stats roll up to team handicap.
See Team & Multi-Agent for setup details, built-in roles, and coordination features.
Read the Green — Why SLOPE is framework-first, not plugin-first
SLOPE writes everything to disk — JSON scorecards, config files, hazard maps. Any agent that can read files and run commands can use it. There's no API key, no cloud service, no vendor lock-in.
Hooks are the deepest integration but not required — they just automate enforcement. Without hooks, the agent follows the same routines manually via rules files and CLI commands. The output is identical.
This means SLOPE works with future tools too, not just today's editors. If a new AI coding environment launches tomorrow and it can run shell commands, SLOPE works on day one.
Tips & Tricks
Practical workflow advice to get more from every sprint.
Declare your approach before coding (`slope claim`)
Pick your club — Driver, Iron, Wedge, or Putter — so the scorecard captures complexity, not just outcome.

Run a briefing at sprint start (`slope briefing`)
Get your hazard index, performance snapshot, and known gotchas before writing a line of code.

Commit early, push often
The last push is your recovery point. Everything since the last push is lost on crash or context loss.

Keep the codebase map current (`slope map`)
The map saves tokens and prevents stale assumptions. Regenerate after adding new files or commands.

Use provisionals for risky shots
Declare a fallback before swinging. "If this doesn't work in 2 shots, play X instead."

Check miss patterns, not just the score (`slope card`)
A bogey from going Long is different from going Left. Directional data drives targeted improvement.

Validate your scorecard before merging (`slope validate`)
Catch schema errors, missing fields, and scoring inconsistencies before they hit the repo.

Use gimmes for trivial tickets
Obvious one-line fixes don't need full ceremony. Mark them as gimmes and save your focus for real shots.
Cheat Sheet
Quick reference for scores, clubs, hazards, commands, and miss directions.
Score Labels
Shot Types (Clubs)
Hazards
Key Commands
Miss Directions
The Big Picture
Every sprint flows through the same lifecycle. Here's the full picture — from the Pre-Round briefing to the 19th Hole reflection.
Sprint Lifecycle
1. Pre-round: briefing + par + slope
2. Plan review: write plan → review → revise (workflow gate — must complete before implementation)
3. For each ticket: Read the Green + select club → code + test + push → log + recovery shot → score + log result
4. Post-hole: scorecard + retro
5. 19th Hole: reflect + improve
Read the Green — Why a fixed lifecycle matters for AI agents
AI coding agents lose context. They crash, compact, and restart. Without a fixed lifecycle, every session starts from zero — the agent has no idea what happened in the last 50 sprints.
SLOPE's lifecycle is designed around this constraint. The workflow engine enforces this lifecycle automatically — sprint-standard is the default workflow. Every artifact is written to disk at specific checkpoints: the plan file after plan review, the scorecard after post-hole, the handicap after the 19th hole. When context compacts, the agent reads these artifacts and picks up exactly where it left off.
The "plan review" gate ensures you have a reviewed approach before implementation starts. The "for each ticket" loop is where most work happens. But the pre-round and post-hole routines are what make the data compound. Without them, you just have a git log. With them, you have a trajectory.
Structure Mapping
| Golf | Development | Scope |
|---|---|---|
| Tournament | Project | The whole thing, all phases |
| Round | Phase | Multi-sprint milestone (6 rounds total) |
| Hole | Sprint | Self-contained, has a par |
| Shot | Ticket | One deliberate action |
| Stroke | Commit | The atomic unit of work |
Read the Green — Why these levels matter
The five levels aren't arbitrary — they map to different feedback loop frequencies. A stroke (commit) gives you immediate feedback: does it compile? A shot (ticket) gives you end-of-task feedback: does the feature work? A hole (sprint) gives you multi-task feedback: did the team improve?
The round (phase) and tournament (project) levels are where strategic patterns emerge. "We always struggle at the mobile-orchestrator boundary" is a round-level insight. "Our scoping gets better every phase" is a tournament-level trend.
Most dev frameworks only operate at one or two levels. SLOPE gives you vocabulary and measurement at all five, so you can reason about performance at the right granularity.
Par System
Par is determined by ticket count. It sets the expected difficulty before the sprint begins.
Course Rating / Slope
Par says how many shots. Slope says how hard those shots are. Each factor adds +1.
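As a sketch of the rule above — each factor adds +1 — the slope rating is simply a count of the active difficulty factors. The factor names here are illustrative, taken from the Sprint 22 example on this page:

```python
def slope_rating(factors):
    """Slope rating per the rule above: each difficulty factor adds +1."""
    return len(factors)

# Sprint 22 from the example on this page: cross-package + new area -> Slope 2
slope_rating(["cross-package", "new-area"])
```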
Read the Green — How par + slope interact with scoring
A Par 4, Slope 3 sprint is meaningfully harder than a Par 4, Slope 0. Both have 3-4 tickets, but the slope factors compound difficulty in ways that ticket count alone doesn't capture.
The handicap system accounts for this: handicap = avg(score - par) over a rolling window. A bogey on a Slope 3 sprint is more forgivable than a bogey on Slope 0. The framework doesn't penalize you for taking on hard work.
Example: Sprint 22 — Par 4, Slope 2 (cross-package + new area)
Score: 4 (Par) — clean execution despite the slope factors.
The handicap computation weights slope difficulty appropriately.
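A minimal sketch of the formula quoted above — handicap = avg(score - par) over a rolling window. The slope-difficulty weighting mentioned in the text is omitted, and the function name is illustrative, not the framework's actual API:

```python
def handicap(cards, window=None):
    """Average of (score - par) over the most recent `window` sprints.
    cards: list of (score, par) tuples, oldest first."""
    if window is not None:
        cards = cards[-window:]
    return sum(score - par for score, par in cards) / len(cards)

cards = [(5, 4), (4, 4), (6, 4), (4, 3), (5, 4)]
handicap(cards)      # all-time
handicap(cards, 5)   # last-5 window
```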
Sprint Types
Not all sprints are the same. SLOPE classifies 8 sprint types, each with different expectations and scoring context.
| ID | Type | Description | When to use |
|---|---|---|---|
| feature | Feature | New functionality — the standard sprint type | Building new features or user-facing changes |
| feedback | Feedback | Refinement based on previous sprint learnings | When miss patterns suggest you need to adjust approach |
| infra | Infrastructure | Infrastructure, CI/CD, deployment work | Setting up or maintaining the build/deploy pipeline |
| bugfix | Bug Fix | Dedicated bug-fixing sprint | Clearing a backlog of known issues |
| research | Research | Investigation and prototyping | Exploring unfamiliar territory before committing to an approach |
| flow | Flow | Mapping user-facing workflows to code paths | Documenting and validating user journeys through the codebase |
| test-coverage | Test Coverage | Increasing test coverage in weak areas | When GIR% or fairway% is low and you need safety nets |
| audit | Course Review | Code quality review — DRY, clean, modern, concise | When the codebase has accumulated tech debt, inconsistent patterns, or outdated practices |
Read the Green — Why sprint type matters for scoring
Sprint type affects how you interpret the score. A research sprint that scores bogey isn't a failure — exploration takes more strokes by nature. A bugfix sprint that scores bogey might indicate deeper issues, since bug fixes should be precise.
The handicap system tracks sprint types to normalize scoring. If your handicap improves but you're only doing feedback and bugfix sprints, the improvement is less meaningful than if you're consistently improving on feature sprints.
Sprint type also drives the training recommendation engine: "long" misses on feature sprints recommend research training. "Right" misses on infra sprints recommend feedback training.
Workflow Engine
Workflows provide structured sprint execution with phase gates, completion conditions, and plan review. Every sprint runs through a workflow — sprint-standard is the default.
Built-in Workflows
- `sprint-standard` (default): Full lifecycle: briefing → plan review → per-ticket implementation → scorecard → review. Includes plan review gates with tier-based approval.
- `sprint-autonomous`: Designed for `slope loop`. Minimal gates, auto-commit, per-ticket timeouts. Optimized for autonomous execution.
- `sprint-lightweight`: Minimal: implement, validate, done. For small fixes or tasks that don't need the full lifecycle.
Key Concepts
- Phases: Ordered groups of steps. A sprint moves through phases sequentially: pre_hole, plan_review, per_ticket, post_hole.
- Step types: command (run CLI), validation (check conditions), agent_input (collect fields), agent_work (implement).
- Completion conditions: files_exist with glob patterns enforce that artifacts (plan files, reviews) exist before a step can advance.
- Snapshotting: When a sprint starts, the workflow YAML is snapshotted. Mid-execution changes to the definition don't affect the running sprint.
Commands
| Command | Purpose |
|---|---|
| slope sprint run <id> --workflow=<name> | Execute a sprint through a workflow |
| slope sprint pause <id> | Pause a running workflow |
| slope sprint resume <id> | Resume a paused workflow |
| slope sprint skip <id> --step=<s> | Skip a blocking step with reason (escape hatch) |
| slope workflow validate <name> | Validate a workflow definition |
| slope workflow list | List available workflows |
| slope workflow show <name> | Display a workflow's phases and steps |
Read the Green — Workflow YAML structure
Workflows are defined as YAML files in .slope/workflows/. A workflow has phases, each containing steps:
```yaml
name: sprint-standard
phases:
  - name: pre_hole
    steps:
      - name: briefing
        type: command
        command: slope briefing
  - name: plan_review
    steps:
      - name: write_plan
        type: agent_work
        completion_conditions:
          files_exist: ["docs/plans/S*.md"]
      - name: review_plan
        type: agent_input
        required_fields: [review_tier, review_complete]
  - name: per_ticket
    repeat_for: tickets
    steps:
      - name: implement
        type: agent_work
      - name: validate
        type: validation
  - name: post_hole
    steps:
      - name: scorecard
        type: command
        command: slope review
```

The `repeat_for: tickets` directive runs the phase once per claimed ticket. Completion conditions enforce that required files exist before advancing. Set `defaultWorkflow: 'sprint-standard'` in your config to use workflows by default.
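For illustration, a `files_exist` completion check can be sketched as a glob test over the worktree. This is a hypothetical helper, not SLOPE's implementation:

```python
from pathlib import Path
from fnmatch import fnmatch

def files_exist(patterns, root="."):
    """True when every pattern matches at least one file under root,
    mirroring the files_exist completion condition above (sketch only)."""
    files = [str(p.relative_to(root)) for p in Path(root).rglob("*") if p.is_file()]
    return all(any(fnmatch(f, pattern) for f in files) for pattern in patterns)

# e.g. files_exist(["docs/plans/S*.md"]) would gate the write_plan step
```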
Club Selection
Declared before each shot. This is your approach commitment — choosing the right club prevents over-engineering and under-scoping.
| Club | Approach | When |
|---|---|---|
| Driver | New infrastructure, big architectural move | Starting something from scratch |
| Long iron | Standard feature implementation | Well-understood pattern, moderate scope |
| Short iron | Targeted implementation | Precise scope, clear target |
| Wedge | Precision fix, surgical refactor | Small area, high accuracy |
| Putter | Polish, final testing, cleanup | Last mile, closing it out |
Read the Green — The psychology of declaring your approach
Club selection isn't just metadata. It's a forcing function that makes you think about complexity before writing code. When you pick up a driver, you're saying: "this is risky, this is new, I'm swinging hard." When you pick up a putter, you're saying: "this is close to done, precision matters more than power."
Over time, club selection data reveals patterns: "We always reach for drivers on OAuth work, but irons would be safer." This is exactly how experienced players optimize — they know which clubs they hit consistently and which ones introduce variance.
The provisional is particularly powerful for AI agents. When an agent declares a provisional ("if this approach fails in 2 commits, switch to X"), it prevents the common failure mode of sinking 45 minutes into a dead-end approach.
Hazards
Obstacles that add strokes to your score. Each hazard type maps to a specific development problem — and a specific recovery pattern.
- Bunker (documented gotcha): Recovery costs an extra shot but you're still in play. The gotcha was documented — you just didn't check.
- Water (breaking change): Penalty stroke + re-tee from a safe position. Irreversible damage — you need to revert and approach from a safer angle.
- Out of bounds (scope creep): Stroke + distance. You left the sprint boundary entirely. Go back to where you were and re-approach.
- Rough (messy code): No penalty, just slower going. The code works but it's harder to navigate. Budget extra time.
- Trees (blocked dependency): Can't go direct. Must punch out sideways — unblock the dependency first, then approach the target.
Read the Green — From bunker locations to a hazard map
The real power isn't in logging individual hazards — it's in the bunker location map that accumulates over time. After 50 sprints, you know exactly where the bunkers are: "the OAuth module always has nvm sourcing issues," "mobile-orchestrator boundary misses right on integration."
This map gets baked into the pre-shot routine. Before starting a ticket, the agent checks: "are there known bunkers in this area?" If yes, it budgets an extra shot and selects a safer club. This is how the best players play — they don't just swing and hope. They study the course.
In SLOPE, this is implemented as common-issues.json — a growing database of gotchas indexed by code area, severity, and recurrence. The pre-round briefing surfaces the top hazards for the sprint's work area automatically.
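As an illustration of how a briefing might query such a database — the real common-issues.json schema isn't documented on this page, so the field names below are assumed:

```python
def top_hazards(issues, area, limit=3):
    """Return the most recurrent known issues for a code area.
    Assumed entry shape: {"area", "severity", "count", "note"}."""
    hits = [i for i in issues if i["area"] == area]
    return sorted(hits, key=lambda i: i["count"], reverse=True)[:limit]

issues = [
    {"area": "oauth", "severity": "high", "count": 7, "note": "nvm sourcing"},
    {"area": "oauth", "severity": "low", "count": 2, "note": "token expiry in tests"},
    {"area": "mobile", "severity": "med", "count": 5, "note": "orchestrator boundary"},
]
top_hazards(issues, "oauth")
```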
Conditions
External factors tracked per sprint. Not controlled by the player — but accounted for in the scorecard.
Read the Green — Why conditions aren't excuses
Conditions aren't excuses — they're context. A bogey in wind (context compaction) is different from a bogey on a calm day. The scorecard captures both the score and the conditions, so when you review your handicap trend, you can separate "bad play" from "hard conditions."
Pin position is especially useful for AI agents. A ticket with loose acceptance criteria ("make the UI better") is a center pin — easy to land near. A ticket with precise requirements ("the animation must be exactly 200ms with this easing curve") is a tucked pin — requires surgical precision. Knowing the pin position before you swing changes your club selection.
Routines
Consistency prevents unforced errors. Same steps, same order, every time.
Pre-Shot Routine — Before Each Ticket
1. Review the codebase map — Check the map for the area you're about to touch
2. Check conditions — Any gotchas for this area? External blockers?
3. Select your club — Declare your approach and record it
4. Pick your target — Read the spec section, know exactly what "on the green" looks like
5. Visualize the shot — If the area has prior miss data, account for it
6. Declare a provisional — If risky, name the fallback before swinging
Post-Shot Routine — After Each Ticket
1. Score the shot — Did it land where you aimed? Fairway / green / miss?
2. Record miss direction — Long (over-engineered), Short (under-scoped), Left (wrong approach), Right (wrong execution)
3. Log hazards hit — Any bunkers (gotchas), water (breaking changes), OB (scope creep)?
4. Commit and push — The stroke is recorded; the last push is the recovery point
5. Update the lie — Update the sprint file with ticket status and context checkpoint
Miss Direction Key
Post-Hole Routine — After Each Sprint
1. Tally the scorecard — Score vs par, shots taken, hazards, misses, conditions
2. Check for reviews — Run `slope review recommend` to see which review types apply
3. Aggregate shot stats — Fairways hit %, GIR %, putts per hole, penalty count
4. Amend with findings — If reviews found issues, run `slope review amend` to inject hazards and recalculate
5. Update the codebase map — Record anything learned for future reference
6. Distill lessons — Update bunker locations for future players
7. Check your handicap — Rolling trend: are you improving or regressing?
8. File the card — Retro JSON + spec-status + completed sprint JSON
19th Hole — Informal Reflection
The bar after the round. Honest, unstructured, human.
1. How did that feel? — Not stats, not process. Gut check.
2. What would you tell the next player? — One piece of advice, no jargon.
3. What surprised you? — The thing you didn't expect, good or bad.
4. What are you excited about next? — Where the energy is for the next round.
Read the Green — Why routine beats talent
The #1 insight from dozens of sprints: routine beats talent. An agent that follows the pre-shot routine every time will outperform an agent that's "smarter" but inconsistent. The best players know this — they don't skip their routine even when they're playing well.
The 19th Hole is the most underrated part. It captures what metrics can't: momentum, energy, intuition. After many sprints of 19th Hole notes, patterns emerge that no stat can show: "infrastructure sprints are energizing," "cross-package work is draining."
For AI agents specifically, the routines solve the context compaction problem. When an agent's context window fills up and old messages get compressed, the routine artifacts on disk preserve everything. The next session reads the sprint file and the scorecard, and it's as if no context was lost.
Scoring & Handicap
Every sprint produces a scorecard. Rolling stats produce a handicap. The handicap tells you if the system is improving.
Scoring Computation
Per-shot inputs (club, result, hazards) feed the scorecard, which auto-computes stats — FW%, GIR%, putts, penalties, misses — and the final score_label.

Review Amendment (optional): review findings (type, severity, ticket) are converted to hazards, and the scorecard is recalculated via buildScorecard().

Handicap windows: last 5 | last 10 | all.
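Assuming the standard golf label mapping (this page only confirms "Par" and "bogey"), score_label might be derived like this:

```python
def score_label(score, par):
    """Map score vs par to a golf label (assumed standard mapping)."""
    labels = {-2: "eagle", -1: "birdie", 0: "par", 1: "bogey", 2: "double bogey"}
    return labels.get(score - par, f"{score - par:+d}")
```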
| Stat | Last 5 | Last 10 | All-time |
|---|---|---|---|
| Handicap | +1.0 | +1.2 | +1.5 |
| Fairways | 90% | 86% | 80% |
| GIR | 82% | 78% | 74% |
| Avg putts | 1.3 | 1.4 | 1.5 |
| Miss pattern | Slightly long | Long | Long |
Read the Green — What the handicap actually tells you
The handicap isn't a judgment — it's a prediction. A handicap of 1.2 means "this system averages 1.2 strokes over par." That's not bad or good in isolation — it's useful because it tells you what to expect from the next sprint.
The three windows (last 5, last 10, all-time) show different things:
- Last 5 — Are you in form right now? Recent momentum.
- Last 10 — Is this a trend or a fluke? Smooths out variance.
- All-time — Your baseline. Where you started vs where you are.
When last-5 is significantly better than all-time, you're improving. When it's worse, something changed — check conditions, check hazards, check if you're tackling harder sprints (higher slope).
The miss pattern is arguably more actionable than the handicap itself. "We consistently miss long" means the team over-engineers. "We miss right" means the approach is correct but execution/integration fails. This directly maps to training drills: research sprints for "long" misses, feedback sprints for "right" misses.
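The dominant-direction computation this relies on can be sketched like so (illustrative, not the shipped analyzer):

```python
from collections import Counter

def miss_pattern(shots):
    """Dominant miss direction and overall miss rate.
    shots: one entry per shot; a direction string, or None if on target."""
    misses = [s for s in shots if s is not None]
    if not misses:
        return None, 0.0
    direction, _ = Counter(misses).most_common(1)[0]
    return direction, len(misses) / len(shots)

miss_pattern(["long", None, "long", "right"])  # dominant "long", 75% miss rate
```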
Analysis & Advisor
SLOPE doesn't just record scores — it analyzes patterns and recommends approaches.
Miss pattern breakdown: dominant direction, systemic issues, miss rate %
Stats by sprint type, club, and par — where you're strong vs weak
recommendClub() suggests approach + confidence from history
classifyShot() auto-scores tickets from execution traces
classifyShotFromSignals() uses CI test results + PR metadata
generateTrainingPlan() produces targeted drills from handicap + dispersion
Read the Green — From data to decisions
The analysis layer is what turns SLOPE from a logging tool into a coaching system. Raw scorecards give you numbers; the advisor gives you actionable recommendations.
recommendClub() looks at your history with similar tickets — same par, same area, same conditions — and suggests the approach most likely to land on the green. Confidence drops when you haven't faced this combination before.
Auto-scoring via classifyShotFromSignals() means agents don't need to self-assess. CI pass/fail, PR size, review comments, and time-to-merge all feed into an objective result classification. This removes the biggest bias in self-scored systems.
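A hedged sketch of what recommendClub() might do under the hood — the filtering keys, record shape, and confidence cap are assumptions, not the documented algorithm:

```python
from collections import Counter

def recommend_club(history, par, area):
    """Suggest the club that most often reached the green on similar tickets.
    history: records like {"par", "area", "club", "result"} (assumed shape)."""
    similar = [h for h in history if h["par"] == par and h["area"] == area]
    greens = [h["club"] for h in similar if h["result"] == "green"]
    if not greens:
        return None, 0.0  # no comparable history: confidence drops to zero
    club, _ = Counter(greens).most_common(1)[0]
    confidence = min(1.0, len(similar) / 10)  # assumed: 10 similar shots = full confidence
    return club, confidence
```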
Training System
When miss patterns emerge, SLOPE recommends targeted training sprints to address weaknesses.
Training Types
| Sprint Type | Recommended Training | Focus |
|---|---|---|
| research | Driving Range | Long-range accuracy — scoping and estimation |
| feedback | Chipping Practice | Short game — precision and refinement |
| test-coverage | Putting Practice | Consistency — repeatable quality |
Nutrition Checklist
Beyond mechanics, SLOPE tracks environmental health factors that affect performance. These are checked in the pre-round briefing.
Read the Green — Why training and nutrition matter
Training is what turns scorecard data into improvement. Without it, you're just tracking stats. With it, each sprint's learnings feed back into specific practice areas.
The nutrition checklist is deceptively important for AI agents. "Hydration" maps to keeping context fresh, "diet" to clean inputs, "recovery" to proper session boundaries. An agent running without these checks is like an athlete who never stretches — performance degrades silently.
Special Plays
- Gimme: Obvious fix, no architect review needed
- Mulligan: Approach was fundamentally wrong — take the shot again
- Provisional: "If this doesn't work in 2 shots, play X instead"
- Lay up: Could go for it, but the risk isn't worth it
- Scramble: Multiple agents take shots, use the best
Example Scorecard
Here's what a real SLOPE scorecard looks like — from an actual sprint.
S26-1: Roadmap validation — Clean implementation, schema + validator.
S26-2: Critical path computation — On target, dependency graph works.
S26-3: Parallel opportunities finder — Clean, followed existing patterns.
S26-4: CLI commands (roadmap show/validate/review) — Straightforward wiring.
Reports & Dashboard
Generate static reports or launch an interactive dashboard from your scorecard data.
- `slope report`: Markdown output with handicap trend, dispersion chart, area performance, nutrition, and sprint table.
- `slope dashboard`: Sprint timeline, miss heatmap, area hazard overlay, and drill-down into individual scorecards.

Read the Green — Static vs interactive
slope report generates a Markdown file you can commit alongside your scorecards — perfect for PR descriptions or team reviews. It's a snapshot: handicap trend, top hazards, area breakdown.
slope dashboard launches a local web UI for exploratory analysis. Click into any sprint to see individual shots, filter by club or hazard type, and spot patterns that static reports miss.
CLI Commands
48 commands across 5 categories — 21 with subcommands.
Sprint Lifecycle
- `slope init`: Initialize `.slope/` directory
  - `--metaphor=<id>`: Set metaphor theme (golf, gaming, dnd, etc.)
  - `--interactive`: Rich interactive setup wizard
- `slope help`: Show detailed per-command usage
  - `<command>`: Command name to show details for
- `slope quickstart`: Interactive tutorial for new users. No additional options.
- `slope doctor`: Check repo health and auto-fix issues
  - `--fix`: Auto-fix detected issues
- `slope version`: Show version or bump with automated PR workflow (2 subcommands)
  - `slope version bump`: Bump version with automated PR workflow
    - `<version>`: Explicit version (e.g. 1.28.0)
    - `--patch`: Patch bump (x.y.Z+1) — bug fixes only
    - `--major`: Major bump (X+1.0.0) — breaking changes
    - `--dry-run`: Preview changes without committing
  - `slope version recommend`: Analyze commits and recommend version tier
- `slope session`: Manage live sessions (4 subcommands)
  - `slope session start`: Start a new session
    - `--role=<role>`: Session role (primary, secondary, observer)
    - `--ide=<id>`: IDE identifier (claude-code, cursor, etc.)
    - `--branch=<name>`: Git branch name
    - `--swarm=<id>`: Join an existing swarm
    - `--agent-role=<role>`: Role within the swarm
  - `slope session end`: End active session
    - `--session-id=<id>`: Specific session to end
  - `slope session heartbeat`: Send session heartbeat
    - `--session-id=<id>`: Specific session to heartbeat
  - `slope session list`: List active sessions
    - `--swarm=<id>`: Filter by swarm
- `slope claim`: Claim a ticket or area for the sprint
  - `--target=<path>`: File or directory to claim
  - `--ticket=<key>`: Ticket key (e.g. S48-1)
  - `--force`: Override conflicting claims
- `slope release`: Release a claim by ID or target
  - `--id=<id>`: Claim ID to release
  - `--target=<path>`: Release claim by target path
- `slope status`: Show sprint course status and conflicts
  - `--json`: Output as JSON
- `slope next`: Show next sprint number (auto-detect). No additional options.
- `slope sprint`: Manage sprint lifecycle state and gates (4 subcommands)
  - `slope sprint start`: Start a new sprint
    - `--number=<N>`: Sprint number (required)
    - `--phase=<phase>`: Initial phase (default: planning)
  - `slope sprint gate`: Mark a gate as complete
    - `<name>`: Gate name to complete
  - `slope sprint status`: Show current sprint state and gates
  - `slope sprint reset`: Reset sprint state

Scoring & Review
- `slope card`: Display handicap card
  - `--metaphor=<id>`: Display theme override
  - `--player=<name>`: Filter to a specific player
  - `--swarm`: Show swarm/multi-agent handicap
  - `--team`: Show team handicap card
- `slope validate`: Validate scorecard(s)
  - `<path>`: Scorecard JSON file to validate
- `slope review`: Format sprint review or manage review state (8 subcommands)
  - Global flags:
    - `--metaphor=<id>`: Display theme override
    - `<path>`: Scorecard file to review (default: latest)
  - `slope review start`: Start a plan review
    - `--tier=<tier>`: Review tier (skip, light, standard, deep)
  - `slope review round`: Record completion of a review round
  - `slope review status`: Show current review state
  - `slope review reset`: Reset review state
  - `slope review recommend`: Check which review types apply to the sprint
  - `slope review findings`: Manage review findings
    - `add`: Add a finding (`--type`, `--ticket`, `--severity`, `--description`)
    - `list`: List recorded findings
    - `clear`: Clear all findings
  - `slope review amend`: Inject review findings as hazards into scorecard
  - `slope review run`: Generate subagent review prompts from PR diff
    - `--pr=<N>`: PR number (default: current branch)
    - `--type=<type>`: Review type: architect, code, or both (default: both)
    - `--sprint=<N>`: Sprint number for findings
    - `--json`: Output as JSON for programmatic use
- `slope auto-card`: Generate scorecard from git + CI signals
  - `--sprint=<N>`: Sprint number (required)
  - `--since=<date>`: Start date for git log
  - `--branch=<ref>`: Git branch to analyze
  - `--theme=<text>`: Sprint theme description
  - `--player=<name>`: Player name for scorecard
  - `--test-output=<file>`: Path to test output for CI signal parsing
  - `--pr=<number>`: PR number for PR signal parsing
  - `--swarm=<id>`: Swarm ID for multi-agent scorecard
  - `--dry-run`: Preview without writing
- `slope classify`: Classify a shot from execution trace
  - `--scope=<files>`: Comma-separated file scope
  - `--modified=<files>`: Comma-separated modified files
  - `--tests=<result>`: Test result (pass, fail, partial)
  - `--reverts=<N>`: Number of reverts
  - `--hazards=<N>`: Number of hazards encountered
- `slope tournament`: Build tournament review from sprints
  - `--id=<id>`: Tournament identifier
  - `--name=<name>`: Tournament display name
  - `--sprints=<N-M>`: Sprint range (e.g. 1-10)
  - `--output=<path>`: Output file path

Analysis & Reporting
- `slope briefing`: Pre-round briefing with hazards and nutrition
  - `--categories=<list>`: Filter by issue categories (comma-separated)
  - `--keywords=<list>`: Filter by keywords (comma-separated)
  - `--sprint=<N>`: Sprint number
  - `--role=<id>`: Filter by role
  - `--player=<name>`: Filter to a specific player
  - `--personal`: Show personal stats only
  - `--no-training`: Skip training recommendations
  - `--compact`: Shorter output for session hooks
- `slope plan`: Pre-shot advisor (club + training + hazards)
  - `--complexity=<level>`: Complexity (trivial, small, medium, large)
  - `--slope-factors=<list>`: Comma-separated slope factors
  - `--areas=<list>`: Comma-separated code areas
  - `--sprint=<N>`: Sprint number for context
- `slope report`: Generate HTML performance report
  - `--html`: Generate HTML report
  - `--output=<path>`: Output file path
- `slope dashboard`: Live local performance dashboard
  - `--port=<N>`: HTTP port (default: 3000)
  - `--no-open`: Don't auto-open browser
  - `--refresh=<N>`: Auto-refresh interval in seconds (0=disable)
  - `--metaphor=<id>`: Display theme override
  - `--player=<name>`: Filter to a specific player
- `slope standup`: Generate or ingest standup report
  - `--session=<id>`: Session ID for standup generation
  - `--role=<id>`: Agent role filter
  - `--sprint=<N>`: Sprint number
  - `--ingest=<path>`: Ingest standup from file (or stdin with `--ingest`)
  - `--aggregate`: Aggregate team standups
  - `--json`: Output as JSON
- `slope analyze`: Scan repo and generate profile
  - `--analyzers=<list>`: Run specific analyzers (comma-separated: stack, git, etc.)
  - `--json`: Output full profile as JSON
- `slope org`: Multi-repo aggregation and org-level metrics (3 subcommands)
  - `slope org init`: Create .slope/org.json template
  - `slope org status`: Show all repos with handicaps and sprint counts (`--json`: Output as JSON)
  - `slope org issues`: Show recurring patterns shared across repos (`--json`: Output as JSON)

Tooling & Config
slope hook Manage lifecycle hooks 3 sub
slope hook add Install guard hooks --level=<level> Hook level (full, scoring) --harness=<id> Target harness (auto-detect or specify) slope hook remove Remove installed hooks slope hook list Show installed hooks --available Show full catalog of available hooks slope guard Run guard handler or manage guard activation 7 sub
slope guard <name> Run a guard (reads hook JSON from stdin) slope guard list Show all available guards slope guard status Show per-harness guard installation state slope guard recommend Show missing guards with relevance to your workflow slope guard docs Show detailed guard documentation <name> Guard name (optional — shows all if omitted) slope guard enable Enable a disabled guard <name> Guard name to enable slope guard disable Disable a guard <name> Guard name to disable slope extract Extract events into SLOPE store
Flags
--file=<path> Event file to extract --session-id=<id> Session ID to tag events --sprint=<N> Sprint number slope distill Promote event patterns to common issues
Flags
--auto Auto-promote patterns above threshold --dry-run Preview without writing --sprint=<N> Filter to a specific sprint --threshold=<N> Minimum occurrence threshold slope map Generate/update codebase map
Flags
--check Check staleness (exit 1 if stale) --output=<path> Custom output path (default: CODEBASE.md) slope workflow Manage workflow definitions 3 sub
| Subcommand | Purpose |
|---|---|
| slope workflow validate | Parse and validate a workflow definition |
| slope workflow list | List all available workflows (project + built-in) |
| slope workflow show | Pretty-print a workflow with phase/step tree |

slope flows — Manage user flow definitions

| Subcommand | Purpose |
|---|---|
| slope flows init | Create .slope/flows.json with example template |
| slope flows list | List all flows with staleness indicators |
| slope flows check | Validate all flows (file existence, staleness); exit 1 if stale |

slope inspirations — Track external OSS inspiration sources

| Subcommand | Purpose |
|---|---|
| slope inspirations add | Add an inspiration source (--url=<url>: source URL, required; --project=<name>: project name, required; --idea="<text>": idea extracted, repeatable, required; --id=<id>: override auto-derived ID) |
| slope inspirations list | List tracked inspirations (--status=<status>: filter by backlogged, planned, implemented, or rejected) |
| slope inspirations link | Link an inspiration to a sprint (--id=<id>: inspiration ID, required; --sprint=<N>: sprint number, required) |

slope metaphor — Manage metaphor display themes

| Subcommand | Purpose |
|---|---|
| slope metaphor list | Show all available metaphors |
| slope metaphor set <id> | Set the active metaphor |
| slope metaphor show <id> | Show all terms for a metaphor |

slope plugin — Manage custom plugins

| Subcommand | Purpose |
|---|---|
| slope plugin list | Show all plugins (built-in + custom) |
| slope plugin validate <path> | Validate a plugin file |

slope store — Store diagnostics and management

| Subcommand | Purpose |
|---|---|
| slope store status | Show store type, schema version, and stats (--json: output as JSON) |
| slope store backup | Back up the store (--output=<path>: backup output path) |

slope escalate — Escalate issues based on severity triggers
Flags: --reason=<text>: manual escalation reason; --session-id=<id>: session ID context; --swarm=<id>: auto-detect escalations in a swarm; --sprint=<N>: sprint number

slope transcript — View session transcript data

| Subcommand | Purpose |
|---|---|
| slope transcript list | List available transcripts |
| slope transcript show <session-id> | Show turn-by-turn summary |
| slope transcript stats [<session-id>] | Aggregate metrics (session ID optional — all sessions if omitted) |

slope loop — Autonomous sprint execution loop

| Subcommand | Purpose |
|---|---|
| slope loop run | Single sprint execution (--sprint=<ID>: sprint to execute; --dry-run: preview without executing) |
| slope loop continuous | Multi-sprint loop (--max=<N>: maximum sprints to run, default 10; --pause=<S>: pause between sprints in seconds; --staging: use staging branch; --dry-run: preview without executing) |
| slope loop parallel | Dual-sprint parallel execution via worktrees (--dry-run: preview without executing) |
| slope loop status | Show loop progress, next sprint, and config (--sprint=<ID>: status for a specific sprint) |
| slope loop config | Loop configuration management (--show: display current config; --set: set a config value as k=v) |
| slope loop results | Format/display sprint results (--sprint=<ID>: results for a specific sprint; --json: output as JSON) |
| slope loop analyze | Mine scorecards, generate backlog (--regenerate: force regeneration) |
| slope loop models | Model selection analytics (--analyze: run model analysis; --show: show current model config) |
| slope loop guide | SKILL.md word count and hazard check (--check: validate guide; --synthesize: synthesize guide content) |
| slope loop clean | Clean up loop artifacts (--results: result files; --logs: log files; --worktrees: git worktrees; --all: everything) |

slope worktree — Manage git worktrees

| Subcommand | Purpose |
|---|---|
| slope worktree cleanup | Clean up stale worktrees — remove, delete branch, delete remote (--path=<path>: target a specific worktree; --all: clean up all secondary worktrees; --dry-run: preview without making changes) |

slope index-cmd — Semantic embedding index management
Flags: --full: full reindex (drop + rebuild); --status: show index stats; --prune: remove embeddings for deleted files; --json: output stats as JSON

slope context — Semantic context search for agents (<query>: free-text semantic search query; --ticket=<key>: use ticket title as query; --file=<path>: find files related to a given file; --top=<N>: limit results, default 5; --format=<fmt>: output format — paths, snippets, or full)

slope prep — Generate an execution plan for a ticket (<ticket-id>: ticket to prepare; --json: output as JSON; --lite: hazards + similar tickets only, no embedding required; --top=<N>: limit context results, default 5)

slope enrich — Batch-enrich a backlog with file context (<backlog-path>: path to backlog file; --output=<path>: output path for enriched backlog; --with-plans: include execution plans; --top=<N>: limit context results per ticket, default 5)

slope stats — Export stats JSON for the slope-web live dashboard

| Subcommand | Purpose |
|---|---|
| slope stats export | Compute SlopeStats JSON from local scorecards + registries (--pretty: pretty-print JSON output; --stdout: write to stdout, the default behavior) |

slope docs — Generate documentation manifest and changelog

| Subcommand | Purpose |
|---|---|
| slope docs generate | Build manifest JSON from registries + git history (--output=<path>: write manifest to path, default .slope/docs.json; --pretty: pretty-print JSON output; --incremental: skip changelog generation; --stdout: write to stdout instead of file) |
| slope docs changelog | Generate a changelog from conventional commits (--since=<version>: changelog since this version/tag; --format=<fmt>: markdown, the default, or json) |
| slope docs check | Compare saved manifest against current state, exit 1 on drift (--manifest=<path>: path to saved manifest, default .slope/docs.json) |
| slope docs validate | Fetch remote manifest and compare against local, exit 1 on drift (--url=<url>: remote manifest URL, default slope-web GitHub raw) |
| slope docs sync | Copy manifest to slope-web or a target directory (--target=<path>: target directory, default adjacent slope-web repo) |

Planning & Roadmap
slope roadmap — Strategic planning and roadmap tools

| Subcommand | Purpose |
|---|---|
| slope roadmap validate | Schema + dependency graph checks (--path=<file>: roadmap file path) |
| slope roadmap review | Automated architect review (--path=<file>: roadmap file path) |
| slope roadmap status | Current progress (--path=<file>: roadmap file path; --sprint=<N>: focus on a specific sprint) |
| slope roadmap show | Render summary — critical path, parallel tracks (--path=<file>: roadmap file path) |
| slope roadmap sync | Sync scorecards into the roadmap (--path=<file>: roadmap file path; --dry-run: preview without writing) |
| slope roadmap generate | Generate from vision + backlog analysis (--path=<file>: output roadmap file path) |

slope vision — Display project vision document (global flag --json: output as JSON)

| Subcommand | Purpose |
|---|---|
| slope vision create | Create a new vision document (--purpose=<text>: project purpose; --priorities=<list>: comma-separated priorities) |
| slope vision update | Update existing vision fields (--purpose=<text>: updated purpose; --priorities=<list>: updated priorities) |

slope initiative — Multi-sprint initiative orchestration

| Subcommand | Purpose |
|---|---|
| slope initiative create | Create a new initiative |
| slope initiative status | Show current initiative state |
| slope initiative next | Show the next sprint in the initiative |
| slope initiative advance | Advance to the next phase |
| slope initiative review | Record a review gate result (--sprint=<N>: sprint number; --gate=<gate>: plan or pr; --reviewer=<type>: reviewer type; --findings=<N>: number of findings) |
| slope initiative checklist | Show the review checklist |

Guard Framework
27 built-in guards that enforce discipline automatically via Claude Code hooks. Guards fire on tool use events and provide real-time warnings.
| Guard | Trigger | Purpose |
|---|---|---|
| explore | PreToolUse | Suggest checking codebase index before deep exploration |
| hazard | PreToolUse | Warn about known issues in file areas being edited |
| commit-nudge | PostToolUse | Nudge to commit/push after prolonged editing |
| scope-drift | PreToolUse | Warn when editing files outside claimed ticket scope |
| compaction | PreCompact | Extract events before context compaction |
| stop-check | Stop | Check for uncommitted/unpushed work before session end |
| subagent-gate | PreToolUse | Enforce model selection on Explore/Plan subagents |
| push-nudge | PostToolUse | Nudge to push after git commits when unpushed count or time is high |
| workflow-gate | PreToolUse | Block ExitPlanMode until review rounds are complete |
| review-tier | PostToolUse | Suggest plan review with specialist reviewers after plan file write |
| version-check | PreToolUse | Block push to main when package versions have not been bumped |
| workflow-step-gate | PreToolUse | Check if current workflow step allows agent_work before editing |
| stale-flows | PreToolUse | Warn when editing files belonging to a stale flow definition |
| next-action | Stop | Suggest next actions before session end |
| pr-review | PostToolUse | Prompt for review workflow after PR creation |
| transcript | PostToolUse | Append tool call metadata to session transcript |
| branch-before-commit | PreToolUse | Block git commit on main/master — create a feature branch first |
| worktree-check | PreToolUse | Block concurrent sessions without worktree isolation |
| sprint-completion (3 hooks) | PreToolUse, Stop, PostToolUse | Block PR creation when sprint gates are incomplete |
| worktree-merge | PreToolUse | Block gh pr merge --delete-branch in worktrees (causes false failure) |
| worktree-self-remove | PreToolUse | Block git worktree remove when targeting own cwd |
| phase-boundary | PreToolUse | Block starting sprint in new phase if previous phase cleanup incomplete |
| claim-required | PreToolUse | Warn when editing code without an active sprint claim |
| post-push | PostToolUse | Suggest next workflow step after git push |
| session-briefing | PostToolUse | Inject sprint context on first tool call of session |
| review-stale | Stop | Warn about scored sprints with missing reviews at session end |
| worktree-reuse | PreToolUse | Guide agent to reuse existing worktrees instead of recreating |
Guards are installed via slope hook add --level=full and can be extended with custom guards through the plugin system.
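SLOPE's custom-guard API isn't spelled out on this page, so the following is only a sketch under assumed conventions: a guard module exposing a name, the hook event it listens to, and a check() function that receives the hook event and returns an allow/deny decision. The export shape and field names are illustrative, not SLOPE's documented interface.

```javascript
// Hypothetical custom guard: block `git push --force` issued through the Bash tool.
// The { name, event, check } shape is an assumption for illustration only.
const forcePushGuard = {
  name: 'force-push-check',
  event: 'PreToolUse', // fire before the tool call lands
  check(hookEvent) {
    const cmd = (hookEvent.tool_input && hookEvent.tool_input.command) || '';
    if (hookEvent.tool_name === 'Bash' && /git push\s+(-f\b|--force)/.test(cmd)) {
      return { decision: 'deny', reason: 'Force push blocked — open a PR instead.' };
    }
    return { decision: 'allow' };
  },
};

// Quick demonstration with a sample hook event:
console.log(forcePushGuard.check({
  tool_name: 'Bash',
  tool_input: { command: 'git push --force origin main' },
}).decision); // "deny"
```

A guard like this would be referenced from a plugin manifest's guards array (see Plugin System below) and validated with slope plugin validate.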
MCP Tools
15 tools available via the SLOPE MCP server — 4 core tools (no store required) and 11 store-backed tools for session and claim management.
Core
search
Discover the SLOPE API — functions, types, constants, flows, and codebase map.
Parameters:
- query? (string): Search term to filter results
- module? (string): Filter by module (core, fs, constants, store, flows, inspirations, init, testing, types, map)

execute
Run JavaScript in a sandboxed node:vm with the full SLOPE API pre-injected.
Parameters:
- code (string): JavaScript code to execute (must return a value)

context_search
Semantic code search — returns relevant snippets instead of full files. Falls back to grep without embedding index.
Parameters:
- query (string): Natural language query or code concept to search for
- top? (number): Max results (default: 5)
- format? (string): Output format: paths, snippets, or full (default: snippets)

testing_plan_status
Show test plan coverage summary: tested, untested, stale, and issue counts per section. No parameters.
Store-Backed
session_status
Show active sessions and claims from the SLOPE store. No parameters.

acquire_claim
Claim a ticket or area for the current sprint.
Parameters:
- target (string): File or directory to claim
- scope? (string): Claim scope (file, directory, module, ticket)
- ticket? (string): Ticket key (e.g. S48-1)
- sprint? (number): Sprint number

check_conflicts
Detect overlapping and adjacent conflicts among active claims.
Parameters:
- sprint? (number): Filter to a specific sprint

store_status
Check store health — schema version, row counts, and error status. No parameters.

testing_session_start
Start a manual testing session with git worktree isolation.
Parameters:
- purpose? (string): Purpose of the testing session
- sprint? (number): Sprint number

testing_session_finding
Record a finding (bug, observation) during an active testing session.
Parameters:
- description (string): Finding description
- severity? (string): Severity (low, medium, high, critical)
- ticket? (string): Related ticket key

testing_session_end
End the active testing session, return a summary, and clean up the worktree.
Parameters:
- session_id? (string): Specific session ID to end
- skip_cleanup? (boolean): Skip worktree cleanup

testing_session_status
Show active testing session info and findings. No parameters.

workflow_next
Get the next step in a workflow execution; returns step info or completion status.
Parameters:
- execution_id? (string): Workflow execution ID (optional if session_id provided)
- session_id? (string): Session ID to find active execution

workflow_complete
Complete the current step and advance the workflow execution.
Parameters:
- execution_id (string): Workflow execution ID
- step_id (string): Step ID being completed
- output? (object): Step output data
- exit_code? (number): Exit code for command steps

workflow_status
Show workflow execution status with progress (completed/total steps).
Parameters:
- execution_id? (string): Specific execution ID, or omit for all active

Configure via .mcp.json in your project root. Requires pnpm -r build before first use. Store-backed tools require an active SLOPE store (.slope/slope.db).
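A minimal .mcp.json fragment might look like the following. The top-level mcpServers key is the standard MCP client convention; the exact command and args for launching the SLOPE server are assumptions — check your install for the actual entry point.

```json
{
  "mcpServers": {
    "slope": {
      "command": "npx",
      "args": ["-y", "@slope-dev/slope", "mcp"]
    }
  }
}
```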
Metaphor Engine
Golf is the default, but SLOPE's scoring engine is metaphor-agnostic. 7 built-in metaphors translate the same internal types into different vocabularies.
Set via slope init --metaphor=gaming or the metaphor field in .slope/config.json. CLI commands accept --metaphor=<id> to override per-command. Plugins can register custom metaphors.
Plugin System
Extend SLOPE with custom metaphors, guards, and scoring logic. Plugins are discovered automatically from node_modules or local paths.
Plugin types:
- metaphor: Register custom vocabulary mappings for any domain
- guard: Add custom guards that fire on tool-use events

Declare a plugin manifest (slope-plugin.json):

{
"name": "slope-plugin-scrum",
"version": "1.0.0",
"types": ["metaphor", "guard"],
"metaphors": ["./metaphors/scrum.js"],
"guards": ["./guards/standup-check.js"]
}

Batch & Loop Execution
slope loop runs sprints autonomously — single, sequential, or in parallel — with dependency-aware scheduling, failure recovery, convergence detection, and cost-optimized model routing.
Execution Modes
slope loop run --sprint=<id> Execute a single sprint. Supports --dry-run to preview without executing and --executor=aider|slope to choose the agent.
slope loop continuous --max=N --staging Sequential multi-sprint execution. Respects dependency order, retries on failure. --staging creates a branch and opens an umbrella PR when done.
slope loop parallel --max-parallel=N Concurrent execution of up to N sprints. Greedy module-overlap detection ensures parallel sprints don't touch the same files. Default max: 3.
slope loop ab --sprint=<id> A/B testing: runs the same sprint with both executors and produces a per-ticket comparison table.
Key Features
- Dependency-aware scheduling: sprints declare depends_on arrays, and only sprints with all dependencies completed are eligible to run.
- Failure recovery: configure maxRetries and retryStrategy (escalate to a stronger model, or regenerate the plan).
- Convergence detection: slope loop convergence analyzes score trends to detect improvement, plateau, or reversion, and stops the loop when diminishing returns kick in.
- Cost-optimized routing: slope loop config recommend suggests cost-optimized models based on historical success rates per complexity level and sprint type.
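The dependency rule above can be sketched as a small eligibility filter. The depends_on field name comes from the text; the sprint IDs and object shape are illustrative.

```javascript
// Return IDs of sprints whose dependencies are all complete and which
// haven't run yet — a sketch of dependency-aware scheduling.
function eligibleSprints(sprints, completedIds) {
  const done = new Set(completedIds);
  return sprints
    .filter((s) => !done.has(s.id)) // skip sprints already completed
    .filter((s) => (s.depends_on || []).every((dep) => done.has(dep)))
    .map((s) => s.id);
}

const backlog = [
  { id: 'S48', depends_on: [] },
  { id: 'S49', depends_on: ['S48'] },
  { id: 'S50', depends_on: ['S48', 'S49'] },
];
console.log(eligibleSprints(backlog, ['S48'])); // [ 'S49' ]
```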
Analysis & Reporting
| Command | Purpose |
|---|---|
| slope loop status | Loop progress, next sprint, current config |
| slope loop results --since=<date> | Aggregated batch results |
| slope loop convergence | Improvement rate, plateau, reversion analysis |
| slope loop models --analyze | Model selection analytics and performance |
| slope loop analyze | Mine scorecards to auto-generate backlog items |
Team & Multi-Agent
SLOPE supports multi-agent sprints with 5 built-in roles. Each role gets a filtered briefing, preferred clubs, and separate handicap tracking.
Built-in Roles
- generalist — Default role, no special focus-area filtering
- backend — API, database, server-side logic specialist
- frontend — UI, components, styling, accessibility specialist
- architect — Cross-package dependencies, API surface, tech debt specialist
- devops — CI/CD, deployment, infrastructure specialist
Multi-Agent Coordination
- Team handicap: weighted average of all agents' handicaps, adjusted for role distribution
- Agent breakdown: per-agent stats within a sprint — who took which shots, individual accuracy
- Claims: the ticket claiming system prevents two agents from working the same shot
- Standups: auto-generated team standups from session data across all agents
- Leaderboard: ranks players by handicap with improvement trend over time
- Player stats: per-player rolling stats computed independently from team aggregates
- Handoffs: structured progress/blockers/decisions/handoffs for each agent per session
Read the Green — How roles change the game
Roles aren't just labels — they change the briefing content. A backend agent gets hazards about database migrations surfaced prominently; a frontend agent gets accessibility gotchas. The same pre-round briefing is filtered through the role's lens.
In a multi-agent sprint, the team handicap is more useful than individual handicaps. It tells you: "when these agents work together on this kind of sprint, they average X over par." This predicts team performance better than any individual metric.
Multi-Repo & Org
slope org aggregates metrics across multiple repositories. Track org-level handicap, surface recurring patterns that span repos, and get a unified view of sprint health.
Commands
slope org init Creates .slope/org.json with a repos array template. Add paths to your other SLOPE-tracked repositories.
slope org status --json Cross-repo dashboard showing each repo's sprint count, latest score, and handicap. Computes an org-level handicap from all repos' scorecards.
slope org issues --json Surfaces recurring hazard patterns that appear in 2+ repos. Promotes cross-repo common issues so all agents benefit from shared learnings.
Configuration
{
"repos": [
{ "name": "api", "path": "../api" },
{ "name": "web", "path": "../web" },
{ "name": "mobile", "path": "../mobile" }
]
}

Org handicap: weighted average across all repos, computed from the last 10 scorecards in each repo.
When the same hazard pattern appears in multiple repos, it's promoted to an org-level common issue. All agents get warned.
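One plausible reading of that weighted average, as a sketch — SLOPE's exact weighting scheme isn't specified here, so this assumes each repo contributes the mean of its last 10 scores, weighted by its total scorecard count:

```javascript
// Sketch: org-level handicap as a weighted average of per-repo recent means.
function orgHandicap(repos) {
  let weighted = 0;
  let totalCards = 0;
  for (const repo of repos) {
    const recent = repo.scores.slice(-10); // last 10 scorecards
    if (recent.length === 0) continue;
    const mean = recent.reduce((a, b) => a + b, 0) / recent.length;
    weighted += mean * repo.scores.length; // weight by scorecard count
    totalCards += repo.scores.length;
  }
  return totalCards ? weighted / totalCards : 0;
}

const handicap = orgHandicap([
  { name: 'api', scores: [2, 3, 1, 2] }, // mean 2, weight 4
  { name: 'web', scores: [4, 4] },       // mean 4, weight 2
]);
console.log(handicap.toFixed(2)); // "2.67" — (2*4 + 4*2) / 6
```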
Context Compression
AI agents work within finite context windows. SLOPE minimizes token usage through guard deduplication, compact briefings, and compaction-safe handoff state — so your agent gets the signal without the noise.
Compression Mechanisms
- Guard deduplication: guards that fire repeatedly within a session are deduplicated. The session-briefing guard fires exactly once per session; repeated scope-drift and commit-nudge warnings are suppressed after the first occurrence.
- Compact briefings: briefings include only the top 3 most relevant hazards, ranked by recency and frequency. Instead of dumping every historical pattern into context, your agent sees just the ones most likely to bite.
- Compact summaries: slope briefing --compact produces a ~200-token summary — handicap, fairways, GIR, top hazards, and claims. Designed for token-constrained environments or post-compaction re-injection.
- Compaction-safe handoffs: when your agent's context window compresses, the compaction guard saves structured session state to .slope/handoffs/ — git state, active claims, review phase, sprint context. The session-briefing guard re-injects this on the next tool call.
- Instruction budget: slope loop guide --check enforces a word ceiling on your SKILL.md / CLAUDE.md files, keeping agent instructions within a manageable budget.
- Durable guard state: critical guard state (hazards, scope-drift, claims) is persisted to .slope/guard-state/ with 7-day pruning. It survives context compaction because it's on disk, not in memory.
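The 7-day pruning of guard state might be implemented along these lines; the entry shape ({ guard, firedAt }) is an assumption for illustration.

```javascript
// Sketch: drop guard-state entries older than 7 days.
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

function pruneGuardState(entries, now = Date.now()) {
  return entries.filter((e) => now - e.firedAt <= SEVEN_DAYS_MS);
}

const now = Date.now();
const kept = pruneGuardState(
  [
    { guard: 'hazard', firedAt: now - 1000 },                      // fresh — kept
    { guard: 'scope-drift', firedAt: now - 8 * 24 * 3600 * 1000 }, // 8 days old — pruned
  ],
  now
);
console.log(kept.map((e) => e.guard)); // [ 'hazard' ]
```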
Session Management
slope session coordinates multiple agents working on the same codebase. Live dashboards, structured handoffs, and per-ticket agent assignment.
Commands
| Command | Purpose |
|---|---|
| slope session dashboard | Live view of all active agents — claims, staleness, swarm grouping |
| slope session handoff --from --to | Structured context transfer between agents |
| slope session assign --ticket --agent | Assign specific tickets to specific agents |
| slope session plan | Ticket-to-agent assignment matrix for the sprint |
The claim-required guard prevents two agents from editing the same files. Cross-session overlap alerts fire when claims conflict.
Structured JSON handoffs in .slope/handoffs/ capture git state, active claims, review phase, and sprint context — so the receiving agent has full situational awareness.
Smart Model Routing
SLOPE's model selector chooses the optimal AI model for each task based on complexity, sprint type, and historical success rates — balancing quality against cost.
5-Layer Routing Hierarchy
1. Context size: estimated tokens > 24k? Route to an API model for the larger context window.
2. Change breadth: multi-package changes (2+ file groups)? Route to API for broader reasoning.
3. Task type: documentation or roadmap work? Route to API for natural-language strength.
4. Historical performance: success rates from your scorecards, cost-adjusted to prefer better value (success_rate / cost). Checked at club+type, club+strategy, then club-only granularity.
5. Club mapping: putter/wedge/short iron → local model; long iron/driver → API model.
Run slope loop config recommend to see model recommendations based on your scorecard history, or slope loop models --analyze for per-model performance analytics.
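The cost-adjusted value score (success_rate / cost) reduces to picking the model with the best value ratio. A minimal sketch — the model names, success rates, and costs are illustrative, not SLOPE's actual routing table:

```javascript
// Sketch: pick the model with the highest success_rate / cost value.
function pickModel(stats) {
  return stats.reduce((best, m) =>
    m.successRate / m.cost > best.successRate / best.cost ? m : best
  );
}

const choice = pickModel([
  { model: 'local-small', successRate: 0.7, cost: 1 }, // value 0.70
  { model: 'api-large', successRate: 0.95, cost: 10 }, // value 0.095
]);
console.log(choice.model); // "local-small" — cheaper model wins on value
```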
Subagent PR Reviews
slope review run generates isolated review prompts from your PR diff and dispatches them to fresh subagents — each with clean context and no implementation memory.
- Architect review: evaluates structure, patterns, and design decisions. Flags over-engineering, missing abstractions, and architectural drift.
- Code review: line-level review — bugs, edge cases, security, naming, test coverage. Focused on implementation quality.
| Command | Purpose |
|---|---|
| slope review run | Review current branch diff (both architect + code) |
| slope review run --pr=N | Review a specific pull request by number |
| slope review run --type=architect | Architecture review only |
| slope review run --json | Output structured JSON for CI integration |
Guard Audit & Metrics
Understand your guard system's enforcement posture and effectiveness with built-in audit and metrics commands.
slope guard audit Groups all guards by enforcement type: mechanical (blocks the action), advisory (injects context), or mixed (both). Shows which guards write disk state for compaction survival.
slope guard metrics Per-guard execution statistics: allow/deny/context/silent counts and block percentage. Identifies which guards fire most often and which have the highest block rate.
slope guard recommend Suggests guards you haven't enabled yet, ranked by relevance to your workflow and repo profile. Workflow-aware — recommendations change based on whether you use sprint-standard or a custom workflow.
Critical guard state persists to .slope/guard-state/ with 7-day pruning and 24-hour staleness detection. Survives context compaction.
Guards that consistently catch real issues can be promoted from advisory to mechanical using slope guard recommend — tightening enforcement over time.
Escalation System
SLOPE detects when things are going off-track and surfaces warnings before small issues become big ones. Available in both auto-detect and manual modes.
| Trigger | Severity | Action |
|---|---|---|
| blocker_timeout | Critical | Agent blocked longer than threshold (default 15 min). Logs event, marks blocked, surfaces in standup. |
| claim_conflict | Critical | Overlapping scope between agents. Detected from claim registry, surfaces in standup. |
| test_failure_cascade | Warning | Excessive test failures across swarm (default threshold: 10). Logs event, surfaces in standup. |
| manual | Varies | Agent explicitly flags an escalation with severity and context. Feeds into handicap and common-issues. |
Run slope escalate during post-shot or post-hole routines. SLOPE scans the current sprint's data for trigger conditions and surfaces warnings.
Agents can manually flag escalation events with severity and context. These feed into the handicap computation and common-issues database.
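The auto-detect triggers in the table above can be sketched as a simple scan over sprint data. The thresholds match the documented defaults (15-minute blocker timeout, 10 test failures); the input shape is an assumption for illustration.

```javascript
// Sketch: scan sprint data for escalation trigger conditions.
function detectEscalations(sprintData) {
  const found = [];
  if (sprintData.blockedMinutes > 15) {
    found.push({ trigger: 'blocker_timeout', severity: 'critical' });
  }
  if (sprintData.claimConflicts > 0) {
    found.push({ trigger: 'claim_conflict', severity: 'critical' });
  }
  if (sprintData.testFailures >= 10) {
    found.push({ trigger: 'test_failure_cascade', severity: 'warning' });
  }
  return found;
}

const warnings = detectEscalations({ blockedMinutes: 22, claimConflicts: 0, testFailures: 3 });
console.log(warnings.map((w) => w.trigger)); // [ 'blocker_timeout' ]
```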
Flow Tracking
Map user-facing workflows (OAuth, checkout, onboarding) to code paths so agents can navigate by intent, not just by file.
| Command | Purpose |
|---|---|
| slope flows init | Create a flows template at .slope/flows.json |
| slope flows list | Show all defined flows with their file mappings |
| slope flows check | Validate flow definitions and detect stale mappings |

Flows are also accessible via MCP: search({ module: 'flows' }) returns all flows, filtered by id, title, or tags. The stale-flows guard warns when editing files belonging to a stale flow.
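A hypothetical .slope/flows.json, inferred from the fields mentioned above (id, title, tags, file mappings). The exact schema may differ — slope flows init generates the canonical template.

```json
{
  "flows": [
    {
      "id": "checkout",
      "title": "Checkout flow",
      "tags": ["payments"],
      "files": [
        "src/cart/checkout.ts",
        "src/payments/session.ts"
      ]
    }
  ]
}
```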
Inspiration Tracking
Track external OSS projects and ideas you want to adapt into your codebase. Link inspirations to sprints so the context travels with the work.
| Command | Purpose |
|---|---|
| slope inspirations add | Register an OSS project with extracted ideas |
| slope inspirations list | Show tracked inspirations, filterable by status |
| slope inspirations link | Link an inspiration to a sprint for context |

Inspirations are stored in .slope/inspirations.json and accessible via MCP: search({ module: 'inspirations' }) returns all tracked sources, filterable by status or project name.
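A hypothetical entry shape for .slope/inspirations.json, inferred from the slope inspirations add flags (url, project, idea, status, sprint link). The actual schema may differ.

```json
{
  "inspirations": [
    {
      "id": "hono-router",
      "url": "https://github.com/honojs/hono",
      "project": "hono",
      "ideas": ["Trie-based route matching"],
      "status": "backlogged",
      "sprint": 48
    }
  ]
}
```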
Roadmap Tools
Plan multi-sprint projects with dependency tracking, critical path analysis, and parallel execution opportunities.
| Command | Purpose |
|---|---|
| slope roadmap validate | Check for structural issues, dependency cycles, numbering gaps |
| slope roadmap show | Display the dependency graph and parallel tracks |
| slope roadmap review | Automated architect review: scope balance, bottlenecks |

Roadmaps are defined in docs/backlog/roadmap.json with sprints, tickets, dependencies, and phases. The critical path computation identifies which sprints block the most downstream work.
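A hypothetical docs/backlog/roadmap.json fragment. The text names sprints, tickets, dependencies, and phases as the top-level concepts, but the exact field names here are assumptions — run slope roadmap validate against your file for the authoritative schema.

```json
{
  "sprints": [
    { "id": "S48", "phase": 1, "tickets": ["S48-1", "S48-2"], "depends_on": [] },
    { "id": "S49", "phase": 1, "tickets": ["S49-1"], "depends_on": ["S48"] },
    { "id": "S50", "phase": 2, "tickets": ["S50-1"], "depends_on": ["S49"] }
  ]
}
```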
Tournament Review
Track multi-sprint initiatives as a cohesive unit. A tournament summary covers:

- Total par/score, landing rate, best/worst sprint
- Average slope, total hazards, fairway/GIR rate
- Recurring patterns across the initiative — what keeps biting you
- Which approaches worked best across all sprints in the tournament

slope tournament — Aggregate scorecards from a multi-sprint initiative into a single tournament summary with combined stats and hazard analysis.
Read the Green — When to use tournament mode
A tournament groups related sprints — like "auth system rewrite" or "v2 migration" — into a single narrative. Individual scorecards show per-sprint performance; the tournament shows the initiative-level trajectory.
The hazard index is especially useful here. A hazard that appears once is a fluke. A hazard that appears in 4 of 6 sprints is a systemic issue that needs a dedicated fix — and SLOPE surfaces it automatically.