
Behavioral Learning Kit

The corrections loop behind a working self-improving AI agent. Three-stage pipeline (capture, classify, graduate), four memory sinks, and a Basecamp card-table playbook. The layer that stops an agent from needing the same five-word correction twice.

Built from 6 months of production data. 22 corrections per 30 days, trending down.
May 12, 2026 · New kit. The corrections-loop architecture behind a 6-month-old self-improving agent, with a Basecamp card-table playbook for the human-in-the-loop guardrail.

Paid Digital Thoughts subscriber?

Yearly subscribers: all products free ($1101.95+ value)

Monthly subscribers: 1 free product per month — subscribe from $29.99/mo

Claim your free copy →
$29 · One-time purchase.

Best for

  • Agents that keep repeating the same correction.
  • Builders who already have memory and need the behavior-change layer.
  • People who want to read every correction, not let the model grade itself.

Not for

  • First-time agent builders. Start with the AI Agent Blueprint instead.
  • Fully autonomous rule-rewrite loops without human review.

What you get

  • Capture / classify / graduate pipeline. Corrections never expire unaddressed.
  • Four memory sinks: working memory (decays), lessons (postmortems), per-rule feedback files, always-loaded RULE lines.
  • Regex-based classifier. 7 patterns map to 6 correction kinds. No LLM call required.
  • Nightly drainer script. Graduates corrections into the right sink each night.
  • Behavioral Learning card-table playbook for Basecamp. Pattern also works in Linear, Notion, Trello.
  • CLAUDE.md snippet to wire the loop into your agent.
  • Pure Python. No API keys. No external dependencies.
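The capture stage of the pipeline can be sketched as a one-call helper that appends each raw correction to a queue file; classification and graduation happen later. The filename and entry fields below are illustrative, not the kit's actual API:

```python
import json
import time
from pathlib import Path

# Illustrative queue location -- the kit defines its own.
QUEUE = Path("corrections_queue.jsonl")

def capture(text: str) -> dict:
    """Append a raw correction to the queue; it stays there until graduated."""
    entry = {"ts": time.time(), "text": text, "status": "queued"}
    with QUEUE.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

capture("Always use ripgrep, not grep.")
```

Because nothing is classified at capture time, the helper stays cheap enough to call on every correction; the nightly drainer does the heavier sorting.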

Package includes

  • correction_capture.py. Single-line capture helper with regex classifier.
  • correction_graduator.py. Nightly drainer that files into the right sink.
  • memory-sinks/. Templates for memory.md, lessons.md, index.md, per-rule feedback files.
  • basecamp-card-table-playbook.md. Manual setup for the Behavioral Learning table.
  • claude-md-snippet.md. Ready-to-paste agent instructions.
  • setup.sh. One-command installer.
  • README.md and architecture.md.
  • QUICKSTART.md and CHANGELOG.md.
  • examples/. Sample queue entries, sample lesson, sample feedback file.

FAQ

How is this different from the Self-Improving Agent Kit?

Different angle. Self-Improving Agent Kit is a 6-component pipeline with performance scoring and bounded loops. Behavioral Learning Kit is the corrections-loop architecture from the blog post: three stages, four sinks, and a card-table playbook for human review. Run one or both.

Do I need Basecamp?

No. The card-table playbook is a pattern. It works in Linear, Notion, Trello, or any board with cards. Basecamp is just what runs in my own setup.

Does the classifier need an LLM?

No. Seven regex patterns sort most corrections into six kinds: skill_misuse, memory_update, behavioral, rule, preference, unknown. Unknowns get queued for human review.
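The regex approach can be sketched as a first-match-wins scan over an ordered pattern list. The patterns below are invented examples to show the shape, not the kit's actual seven:

```python
import re

# Illustrative patterns only -- the kit ships its own seven regexes.
PATTERNS = [
    (re.compile(r"\b(always|never)\b", re.I), "rule"),
    (re.compile(r"\bremember\b", re.I), "memory_update"),
    (re.compile(r"\b(prefer|instead of)\b", re.I), "preference"),
    (re.compile(r"\bwrong (tool|skill|command)\b", re.I), "skill_misuse"),
    (re.compile(r"\b(stop|quit) doing\b", re.I), "behavioral"),
]

def classify(text: str) -> str:
    """First matching pattern wins; no match means human review."""
    for pattern, kind in PATTERNS:
        if pattern.search(text):
            return kind
    return "unknown"
```

No LLM call, no API key: a handful of compiled regexes is enough when unmatched corrections fall through to a human-reviewed "unknown" bucket rather than being forced into a wrong kind.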

Which sinks does the graduator write to?

Four. memory.md for working notes that decay. lessons.md for incident postmortems. per-rule feedback files for searchable rules. RULE lines in index.md for the small set of always-loaded behaviors.

Secure checkout by Stripe. Instant download + guided Claude Code setup.