Agent Workflow
Sidecar is designed to work with AI coding agents out of the box.
When an agent reads your repository's AGENTS.md file and follows its instructions,
Sidecar becomes shared project memory — context that persists between sessions, regardless of
who (or what) is doing the work.
The core idea
Every AI coding session starts cold. The agent has no memory of previous sessions, no understanding of why decisions were made, and no list of outstanding follow-up work.
Sidecar solves this by giving agents a structured place to:
- Read project context before starting
- Record decisions and reasoning as they work
- Log what they changed and which files they touched
- Leave tasks for the next session
- Refresh the project summary on the way out
The result is a project that accumulates memory over time — readable by humans in
.sidecar/summary.md and accessible to agents via sidecar context.
AGENTS.md
When you run sidecar init, it creates .sidecar/AGENTS.md.
This file is the agent's playbook — it tells any AI coding agent how to interact with
Sidecar before, during, and after a work session.
Most AI coding tools (like Claude Code, Cursor, Aider, and others) will automatically
read AGENTS.md or similar instruction files. You can also reference it
directly in your system prompt or project instructions.
# Agent Instructions — Sidecar
Sidecar is the local project memory tool for this repository.
## Before starting work
Run this to get context from previous sessions:
$ sidecar context --format markdown
## After completing work
Always record a work log:
$ sidecar worklog record \
--done "<what changed>" \
--files <paths> \
--by agent
If a design or architectural decision was made:
$ sidecar decision record \
--title "<decision>" \
--summary "<why>" \
--by agent
If follow-up work remains:
$ sidecar task add "<follow-up>" --priority medium --by agent
Always refresh the summary:
$ sidecar summary refresh

The agent workflow, step by step
1. Read context before starting
Before any work begins, the agent reads current project context:
$ sidecar context --format markdown

This outputs recent decisions, open tasks, recent work logs, and notes. The agent now knows what happened in previous sessions, what decisions shaped the codebase, and what work is outstanding — without reading the entire git history.
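The exact output depends on what has been recorded. A purely hypothetical excerpt (the headings and entries below are illustrative, not Sidecar's guaranteed format) might look like:

```markdown
## Recent decisions
- Switched to in-memory cache for session tokens: Redis was overhead for v1

## Open tasks
- [medium] Add session expiry tests

## Recent work logs
- Refactor session handling: moved session logic to src/session.ts, added tests
```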
2. Do the work
The agent does whatever it was asked to do. Nothing special here.
3. Record decisions made during work
If the agent made an architectural or design choice — picked a library, changed a data model, chose an approach over another — it records the reasoning:
$ sidecar decision record \
--title "Switched to in-memory cache for session tokens" \
--summary "Redis was overhead for v1; revisit if session volume grows" \
--by agent

4. Log the completed work
The agent records what it did, what the goal was, and which files changed:
$ sidecar worklog record \
--goal "Refactor session handling" \
--done "Moved session logic to src/session.ts, added tests" \
--files src/session.ts,src/auth.ts,tests/session.test.ts \
--by agent

5. Add follow-up tasks
If the agent noticed work that should happen later:
$ sidecar task add "Add session expiry tests" --priority medium --by agent

6. Refresh the summary
Finally, the agent refreshes the project summary so the next session starts with up-to-date context:
$ sidecar summary refresh

Why this matters
Without Sidecar, every AI session starts from scratch. The agent reads the code, makes inferences about intent, and has no way to know:
- Why a particular approach was chosen over alternatives
- What the team decided to defer until later
- What conventions the project follows
- What follow-up work was planned
With Sidecar, this information accumulates over time. Each session — human or agent — leaves the project a little better documented than before.
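Taken together, steps 3 through 6 amount to a short end-of-session routine. A minimal sketch as a shell function, using the commands shown above (the `sidecar_wrapup` helper and the `SIDECAR` variable are assumptions of this sketch, added so the binary can be substituted or stubbed; the flags come from the workflow steps):

```shell
# End-of-session wrap-up sketch using the Sidecar commands documented above.
# SIDECAR is an assumption of this sketch: it lets you point at a different
# binary (or a stub) without editing the script.
SIDECAR="${SIDECAR:-sidecar}"

sidecar_wrapup() {
  # 3. Record any decision made during the session
  "$SIDECAR" decision record \
    --title "Switched to in-memory cache for session tokens" \
    --summary "Redis was overhead for v1; revisit if session volume grows" \
    --by agent

  # 4. Log the completed work and the files it touched
  "$SIDECAR" worklog record \
    --goal "Refactor session handling" \
    --done "Moved session logic to src/session.ts, added tests" \
    --files src/session.ts,src/auth.ts,tests/session.test.ts \
    --by agent

  # 5. Leave follow-up work for the next session
  "$SIDECAR" task add "Add session expiry tests" --priority medium --by agent

  # 6. Refresh the summary so the next session starts warm
  "$SIDECAR" summary refresh
}
```

An agent (or a git hook) can call `sidecar_wrapup` once at the end of a session instead of remembering four separate commands.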
The --by agent flag
Many Sidecar commands accept a --by agent flag. This tags the record
with its origin so humans can see which records were created by agents versus by
human developers. It's optional but useful for auditing and transparency.
Good practice
Include --by agent in all records
created during automated or AI-assisted sessions. It keeps the project history
auditable and clear.
Putting it in CLAUDE.md or AGENTS.md
Different AI tools read different instruction files:
- Claude Code reads CLAUDE.md at the project root
- Other agents may read AGENTS.md, .cursorrules, or similar
Copy the workflow instructions from .sidecar/AGENTS.md into whatever
instruction file your agent reads. The Sidecar commands are the same regardless
of which agent is running them.
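The copy itself is one command. A lightly guarded sketch (the `append_playbook` helper is hypothetical, introduced here for illustration; the paths follow the conventions described above):

```shell
# append_playbook SRC DEST: hypothetical helper that appends Sidecar's
# agent playbook to whatever instruction file your agent actually reads
# (CLAUDE.md, AGENTS.md, .cursorrules, ...).
append_playbook() {
  src="$1"
  dest="$2"
  # Refuse to run if the playbook is missing (e.g. sidecar init not run yet).
  [ -f "$src" ] || { echo "no playbook at $src" >&2; return 1; }
  # Separate from any existing content, then append verbatim.
  [ -s "$dest" ] && printf '\n' >> "$dest"
  cat "$src" >> "$dest"
}

# Usage: append_playbook .sidecar/AGENTS.md CLAUDE.md
```

Appending rather than overwriting preserves any project instructions the file already contains.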