Meet Tammy
I built an AI assistant. I call her Tammy. The name is short for T2, which stands for Tom and Tammy. I’m Tom. She’s the other half of a two-agent architecture that runs on Claude Code.
I want to be clear about something up front: I did not write code to build this. I’m a 63-year-old Navy veteran and journalist. I spent 12.5 years as a Fire Control Technician on weapons guidance systems, then transitioned into IT after the service. I can describe a system. I can spec requirements. I cannot write Python or JavaScript. What I did was describe what I needed in plain English, and Claude built it from those specs.
That framing matters. Tammy is not a product for developers. It’s proof that you can build a serious, production-grade AI workflow system without writing a single line of code yourself.
What Tammy actually does
Tammy handles roughly 80% of my routine daily work. I run two publications, The Palm Bayer (hyperlocal Palm Bay news) and Space Coast Defense (aerospace and defense intelligence). The workload is substantial: article research, drafting, fact-checking, video production, social distribution, meeting coverage, Substack publishing, email triage, calendar management, and a constant stream of civic tracking work.
Tammy does most of that. Not perfectly, not without direction. But she does the legwork so I can focus on the judgment calls.
The core capabilities are persistent memory, a 26-agent stack, 25+ custom skill workflows, lifecycle hooks that enforce consistency, and multi-AI integration. When a task needs Gemini or Perplexity, Tammy routes it there. One orchestrator, multiple AI backends.
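The one-orchestrator, multiple-backends idea can be sketched in a few lines. This is an illustrative shape, not Tammy’s actual code; the backend names, task fields, and routing keys here are all hypothetical:

```python
# Illustrative sketch of single-orchestrator, multi-backend routing: the
# orchestrator looks at what a task needs and hands it to the matching AI
# backend. Backend handlers and the "needs" categories are hypothetical.
def run_claude(task):      return f"claude handled: {task['goal']}"
def run_gemini(task):      return f"gemini handled: {task['goal']}"
def run_perplexity(task):  return f"perplexity handled: {task['goal']}"

BACKENDS = {
    "default": run_claude,
    "long_context_video": run_gemini,    # e.g. video or long-document work
    "live_web_research": run_perplexity, # e.g. sourced web research
}

def orchestrate(task):
    # One orchestrator decides; the backends just execute.
    handler = BACKENDS.get(task.get("needs", "default"), run_claude)
    return handler(task)

print(orchestrate({"goal": "source today's launch schedule",
                   "needs": "live_web_research"}))
```

The point of the pattern is that the orchestrator owns the decision and the backends stay interchangeable: adding a new AI service means adding one entry to the table, not rewriting workflows.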
Here’s how I described it on the Tammy site: “TAMMY has a permanent operating brief. The system knows your role, your projects, your style guide, your contacts, and 1,000+ memories about how you like things done.”
That is the key differentiator. This is not a chatbot that starts fresh every session. Tammy knows who I am, what I’m working on, and how I want things done. Every session starts from that foundation.
The numbers
- 26 agents across three model tiers (Opus orchestrates, Sonnet judges, Haiku extracts)
- 25+ custom skills with tab autocomplete
- 57+ banned words in the voice-calibration list (I don’t want AI writing like AI)
- 4 lifecycle hooks that fire at session events
- 1,400+ memories stored in a ChromaDB-backed MCP server
- Cold-start context reduced from ~15,600 tokens to ~2,800 tokens after optimization
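To make the memory claim concrete, here is a minimal sketch of the idea behind a persistent memory store. Tammy’s real system uses vector embeddings in a ChromaDB-backed MCP server; this stdlib-only version stands in keyword overlap for similarity search, and every name in it is illustrative:

```python
# Illustrative sketch of persistent memory: memories survive on disk between
# sessions and are retrieved by relevance to the current task. Word-overlap
# scoring is a stand-in for the embedding similarity a real vector store
# (like ChromaDB) would use.
import json
import tempfile
from pathlib import Path

class MemoryStore:
    def __init__(self, path):
        self.path = Path(path)
        self.memories = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def remember(self, text, tags):
        self.memories.append({"text": text, "tags": tags})
        self.path.write_text(json.dumps(self.memories))  # persist to disk

    def recall(self, query, n=3):
        # Rank memories by word overlap with the query (embedding search
        # in the real thing), return the top n.
        words = set(query.lower().split())
        scored = sorted(
            self.memories,
            key=lambda m: len(words & set(m["text"].lower().split())),
            reverse=True,
        )
        return [m["text"] for m in scored[:n]]

store = MemoryStore(Path(tempfile.mkdtemp()) / "memories.json")
store.remember("Tom prefers short declarative sentences in ledes", ["style"])
store.remember("Palm Bay council meets first and third Thursdays", ["civic"])
print(store.recall("style guide for declarative sentences", n=1))
```

Because the store is reloaded on startup, every session begins with the accumulated context rather than a blank slate, which is the property that separates this from a reset-every-time chatbot.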
The agent stack
The 26 agents are specialized personas. Each one has a mission statement, a protocol, output format requirements, quality standards, and scope limits. They’re not generic “do stuff” prompts. A fact-check agent behaves differently than a social media agent. The Palm Bayer agent knows Palm Bay municipal law. The video production agent knows the FFmpeg pipeline.
Model tier selection is intentional. Opus handles complex editorial decisions. Sonnet handles judgment and synthesis. Haiku handles high-volume structured extraction like VIP roster updates or transcript speaker mapping. Using the right tier for the job matters for both quality and cost.
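The tiering logic amounts to a routing table: map each task category to the cheapest tier that handles it well. The task names and defaults below are hypothetical, not Tammy’s actual configuration:

```python
# Hypothetical sketch of model-tier routing. Each task category maps to the
# lowest tier that can do the job well; unknown tasks fall back to the mid
# tier rather than paying for the top one.
TIER_FOR_TASK = {
    "editorial_decision": "opus",       # complex editorial judgment
    "fact_check_synthesis": "sonnet",   # judgment and synthesis
    "transcript_extraction": "haiku",   # high-volume structured extraction
    "vip_roster_update": "haiku",
}

def pick_tier(task: str) -> str:
    # Mid tier is a safe default: reasonable quality at moderate cost.
    return TIER_FOR_TASK.get(task, "sonnet")

print(pick_tier("transcript_extraction"))  # haiku
print(pick_tier("editorial_decision"))     # opus
```

The design choice is cost-shaped: the expensive tier only sees the tasks that actually need it, while bulk extraction work runs on the cheap tier by default.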
Who this is for
I built Tammy for myself. But the system design generalizes to anyone running information-intensive work. Journalists, lawyers managing multiple cases, project managers, executives who need daily briefings. The common thread is: you’re drowning in information and routine tasks, and you need something that actually knows your context rather than starting from zero every time.
The documentation site (which you’re reading now) was built to show what’s possible and give people a path to build their own version. The interview-driven setup approach means no templates, no placeholders. Your version of Tammy is built from your own words and your own workflow.
Built on what
Tammy runs on Claude Code, Anthropic’s command-line tool. It requires at minimum a Claude Pro subscription. I run the Max plan because mid-task context limits on lower tiers disrupted workflows too frequently. The Max plan is what makes it viable for real daily use.
The infrastructure is Python scripts, shell hooks, markdown files, and an MCP memory server. No databases beyond ChromaDB. No servers beyond the Contabo VPS I use for hosting. The whole thing lives in a local directory on my Windows 11 machine.
That’s Tammy. Direct, functional, built to actually work. The rest of this site gets into how to get started and the specific how-tos that took me the longest to figure out.