
Beads: Memory Systems for the Agentic Era

Steve Yegge's Beads tackles the "agent amnesia" problem with a git-backed issue tracker designed for AI agents. What does this mean for the future of AI-assisted development?

Steve Yegge—legendary software engineer, prolific blogger, and the person who accidentally published that internal Google rant about platforms—just released something that might reshape how we think about AI coding assistants.

It's called Beads, and it's deceptively simple: a distributed, git-backed issue tracker. But what makes it remarkable isn't the issue tracking. It's who the issue tracker is designed for.

Beads is built for AI agents, not humans.

The 50 First Dates Problem

If you've used AI coding assistants for any serious project, you've experienced the frustration Yegge calls "agent amnesia."

Every new session starts from zero. The agent has no memory of what you built yesterday, what approaches failed, which files you touched, or why certain decisions were made. You're essentially re-introducing yourself to your codebase every single time.

Yegge describes it perfectly: it's like the movie 50 First Dates, where the protagonist must rebuild her understanding of the world every morning. Except instead of a romantic comedy, it's you explaining your authentication architecture for the fifteenth time.

The current workaround? Messy markdown plans. Enormous context files. Copy-pasting previous conversations. All of which eat into the precious context window that could be used for actual coding work.

What Beads Actually Does

Beads approaches this problem with a clever architectural insight: use Git as a database.

Issues are stored as JSONL (JSON Lines) in a .beads/ directory within your repository. This means your project's task state is:

1. Version controlled alongside your code
2. Branchable for parallel workstreams
3. Mergeable when branches come together
4. Distributed without requiring any server

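The mechanics are worth seeing concretely. Here's a minimal sketch of storing issues as JSONL inside a repository; the field names and file path are illustrative assumptions, not Beads' actual schema:

```python
import json
from pathlib import Path

# Hypothetical issue records -- field names are illustrative,
# not Beads' actual on-disk schema.
issues = [
    {"id": "bd-a1b2", "title": "Add login endpoint", "status": "open"},
    {"id": "bd-f14c", "title": "Write auth tests", "status": "open",
     "blocked_by": ["bd-a1b2"]},
]

path = Path(".beads") / "issues.jsonl"
path.parent.mkdir(exist_ok=True)

# JSONL means one JSON object per line, so git's line-based
# diff and merge machinery works on task state for free.
with path.open("w") as f:
    for issue in issues:
        f.write(json.dumps(issue) + "\n")

# Reading back is just line-by-line parsing.
loaded = [json.loads(line) for line in path.read_text().splitlines()]
```

The one-object-per-line layout is the key design choice: two branches that each append an issue produce a clean merge rather than a conflict inside one giant JSON blob.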
But here's where it gets interesting for AI agents.

Dependency-Aware Ready Work

Unlike flat task lists, Beads tracks relationships between issues. When an agent runs bd ready, it doesn't just get a list of open tasks—it gets only the tasks with no open blockers. The dependency graph ensures agents work in the correct order without wasting context figuring out what's actually actionable.
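The ready query reduces to a simple graph filter. A minimal sketch, assuming an in-memory issue map with a blocked_by field (an assumption for illustration, not Beads' internals):

```python
# Issues keyed by ID; "blocked_by" lists the IDs this issue waits on.
issues = {
    "bd-a1b2": {"status": "closed", "blocked_by": []},
    "bd-f14c": {"status": "open",   "blocked_by": ["bd-a1b2"]},
    "bd-9e01": {"status": "open",   "blocked_by": ["bd-f14c"]},
    "bd-77d3": {"status": "open",   "blocked_by": []},
}

def ready(issues):
    """Open issues whose blockers are all closed -- actionable right now."""
    return [
        iid for iid, issue in issues.items()
        if issue["status"] == "open"
        and all(issues[b]["status"] == "closed" for b in issue["blocked_by"])
    ]

print(sorted(ready(issues)))  # ['bd-77d3', 'bd-f14c']
```

Note that bd-9e01 stays hidden even though it's open: its blocker bd-f14c hasn't closed yet. That's the whole point — the agent never has to reason about whether a task is actually startable.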

Collision-Free IDs

When multiple agents (or humans) create tasks simultaneously across branches, traditional sequential IDs collide. Beads uses hash-based IDs (bd-a1b2, bd-f14c) that scale gracefully—4 characters for small projects, 5 for medium, 6 for large. No coordination required.
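One way to get IDs like these is to truncate a hash of random content; the bd- prefix and the 4/5/6-character widths come from the article, but the hashing scheme below is an assumption, not Beads' actual implementation:

```python
import hashlib
import uuid

def new_id(width=4):
    """Generate a short, collision-resistant ID with no coordination.

    Hashing random bytes means two agents on different branches can
    mint IDs independently; the width grows as the project does.
    """
    digest = hashlib.sha256(uuid.uuid4().bytes).hexdigest()
    return f"bd-{digest[:width]}"

print(new_id())   # e.g. bd-3f9c
print(new_id(6))  # e.g. bd-a41b2e
```

The width tiers track the birthday bound: four hex characters give 65,536 values, which is comfortable for a small project but collision-prone past a few hundred issues — hence widening to five or six characters as the project grows.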

Provenance Tracking

When an agent discovers new work while completing a task, it can mark the relationship: "discovered-from." This creates an audit trail of how tasks were found, building institutional memory about your project's evolution.
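In data-model terms, this is just a typed edge from the new task back to the task that surfaced it. A sketch with illustrative field names and a placeholder ID scheme (neither is Beads' actual format):

```python
def discover(issues, parent_id, title):
    """Record work found mid-task, linked back to where it surfaced."""
    child_id = f"bd-{len(issues):04x}"  # placeholder IDs for this sketch
    issues[child_id] = {
        "title": title,
        "status": "open",
        "discovered_from": parent_id,   # the provenance link
    }
    return child_id

issues = {"bd-0000": {"title": "Refactor auth", "status": "in_progress"}}
new = discover(issues, "bd-0000", "Token refresh path is untested")

# Later, the trail answers "where did this task come from?"
print(issues[new]["discovered_from"])  # bd-0000
```

Because the link is structured data rather than a sentence in a description, an agent can later walk the graph: "show me everything discovered during the auth refactor" becomes a query, not an archaeology project.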

Semantic Compaction

This is perhaps the most forward-thinking feature. Old, completed tasks undergo "memory decay"—an LLM summarizes them into condensed records. The detailed implementation notes from six months ago become a one-paragraph summary, preserving context without overwhelming future sessions.
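The decay policy itself is straightforward; it's the summarization step that does the heavy lifting. A sketch of an age-based compaction pass, with the LLM call stubbed out and all field names assumed for illustration:

```python
from datetime import datetime, timedelta, timezone

def summarize(issue):
    # Placeholder for an LLM call that condenses the full history
    # into a short record.
    return f"{issue['title']}: closed; details compacted."

def compact(issues, max_age=timedelta(days=180), now=None):
    """Replace detailed notes on long-closed issues with a summary."""
    now = now or datetime.now(timezone.utc)
    for issue in issues:
        closed = issue.get("closed_at")
        if closed and now - closed > max_age and "summary" not in issue:
            issue["summary"] = summarize(issue)
            issue.pop("notes", None)  # drop the bulky detail

old = datetime.now(timezone.utc) - timedelta(days=365)
issues = [{"title": "Migrate to OAuth", "closed_at": old,
           "notes": "pages of implementation detail..."}]
compact(issues)
print(issues[0]["summary"])
```

Recent issues keep full fidelity; old ones shrink to a paragraph. Future sessions still know the OAuth migration happened and why, without paying the token cost of every implementation note.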

The Design Philosophy

What makes Beads genuinely interesting isn't any single feature. It's the design philosophy underlying all of them.

Context is precious. Every token in an AI's context window is valuable real estate. Beads treats project knowledge as a queryable database rather than a document to be loaded wholesale. Agents retrieve what they need, when they need it.

Git is good enough. Rather than building another hosted service with accounts and subscriptions and APIs, Yegge recognized that Git already solves distributed state synchronization. If your code lives in Git, why shouldn't your project memory?

Agents are first-class citizens. The CLI outputs JSON by default. Commands are optimized for machine consumption. The tool acknowledges that in many workflows, AI agents will interact with project state more frequently than humans.

What This Means for AI-Assisted Development

Beads represents a broader shift in how we think about development tooling. We're moving from tools built for humans that AI can awkwardly use, to tools built explicitly for human-AI collaboration.

The implications are significant:

Project management becomes queryable. Instead of context-stuffing, agents can ask specific questions: "What's blocking task X?" or "Show me everything discovered from the authentication refactor."

Sessions become continuous. The arbitrary boundary between "sessions" starts to dissolve when state persists intelligently. An agent picking up your project should feel like resuming a conversation, not starting over.

Work becomes traceable. With provenance tracking and structured dependencies, you can understand not just what was done, but why and how decisions emerged.

At LightSprint, we've been thinking about similar problems. Our automatic todo completion feature uses AI to analyze commits and match them against tasks—essentially giving your task board memory about what code changes accomplish. When you push a commit, LightSprint's AI looks at the changes and determines which tasks should be marked complete, without you needing to update anything manually.

But Beads goes further in some ways. The dependency-aware ready queue is particularly elegant—surfacing only truly actionable work prevents the paralysis of looking at a flat list where half the items are blocked on something else.

The CLI-First Approach

One of Yegge's more opinionated choices is prioritizing CLI over MCP (Model Context Protocol) schemas. His reasoning: a simple CLI with JSON output uses 1-2K tokens for tool definitions, while comprehensive MCP schemas can consume 50K.

This is a real tradeoff. MCP provides rich semantics and type safety. But for resource-constrained agent loops, simpler often wins. The insight isn't "CLI good, MCP bad"—it's that tool design should account for the context cost of using the tool.

Reflections: What Can We Learn?

Building LightSprint, we've wrestled with many of the same challenges Beads addresses. A few lessons stand out:

Structure beats prose. When AI agents work with task information, structured data with explicit relationships dramatically outperforms unstructured descriptions. Our task generation system produces structured outputs—related files, implementation todos, complexity assessments—because agents can act on structure more reliably than they can parse paragraphs.

The gap between "open" and "actionable" matters. Beads' ready queue acknowledges something obvious in retrospect: not all open tasks are equal. Some are blocked, some depend on external factors, some are waiting for information. Surfacing truly actionable work—tasks you can start right now—reduces cognitive load for both humans and agents.

Memory management will become critical. As projects grow, naive approaches to context ("just dump everything") stop working. Beads' compaction strategy—summarizing old information while preserving recent detail—mirrors how human memory actually works. We're likely to see more tools adopting similar approaches.

Git as substrate has legs. By building on Git rather than alongside it, Beads inherits branch-based isolation, merge semantics, and distributed sync for free. It's a reminder that sometimes the best infrastructure is infrastructure that already exists.

LightSprint takes a different architectural approach—we're a hosted platform with deep GitHub integration rather than a CLI tool living in your repo. But the underlying insight is the same: AI-assisted development needs tooling that treats agents as real participants, not just fancy autocomplete.

The Road Ahead

Beads is four weeks old as of this writing, with dozens of contributors and, by Yegge's count, "tens of thousands" of users. That's remarkable adoption for something so new.

But more interesting than the adoption is what it signals. The AI coding space is evolving from "how do we make AI write code?" to "how do we make AI participate in development workflows?" These are very different questions with very different answers.

Writing code is a single-shot problem. You prompt, you receive, you're done. Development workflows are continuous, stateful, collaborative affairs where context accumulates over days and weeks and months.

Tools like Beads—and hopefully tools like LightSprint—are the early experiments in what AI-native development infrastructure might look like. Not AI bolted onto existing tools, but tools built from the ground up for a world where AI agents are regular collaborators.

The agent amnesia problem is real. But it's also solvable. And watching the community figure out how to solve it is one of the most exciting things happening in developer tooling right now.


Beads is open source and available at github.com/steveyegge/beads. If you're building AI agents or interested in agent memory systems, it's worth exploring.

Interested in AI-native project management? Try LightSprint—where your task board stays current with your code, not against it.