#009 - What If Your AI Remembered Yesterday?
February 27, 2026
Welcome back.
Something happened this week that I can’t stop thinking about.
I was in the middle of an architecture session with an AI agent. We were designing a new feature. Mid-conversation, the agent surfaced a web clipping I’d saved weeks earlier. Related article. Exactly on point. I hadn’t searched for it. I hadn’t mentioned it. The agent knew it was relevant.
That’s when I realized my systems were working.
The zero-context problem
Every AI tool you use today starts from scratch.
Open a new chat: blank slate. The AI has no idea what you worked on yesterday. No idea what decisions you made last week. No idea what you care about. You’re strangers every time.
For one-off questions, that’s fine. But if you’re building products, running client projects, or managing a roadmap, the lack of continuity is a real tax. You re-explain context. You re-share links. You re-state your preferences. The AI is capable, but it has amnesia.
I started to notice how much time I was losing to that re-explanation overhead. Not catastrophic, but constant. A background drain.
I wanted an agent that learned from yesterday.
What I built
I built a memory system from scratch, baked directly into the agent platform I’m building.
It’s built with Laravel, PostgreSQL, and pgvector. It runs alongside everything else in my system; no third-party dependency and no external infrastructure to manage.
The architecture has three layers:
Raw transcripts - Every conversation is stored as the source of truth.
Compressed observations - What the agent noticed and extracted from those conversations. Patterns, preferences, context.
Normalized truths - Facts, decisions, and preferences that have been verified and scored. Each truth has a confidence level and a validity window.
That last layer is what makes it different from a basic log. Truths have a lifecycle. They can be superseded when something changes. Contradicted when I make a different call. Archived when they’re no longer relevant. It’s not append-only; the system actively reviews itself and resolves conflicts.
For recall, the system blends semantic similarity with recency, importance, and confidence. Contradictions get penalized. When I’m in a session, it surfaces what’s relevant; not what’s recent.
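To make that blend concrete, here's a hedged sketch of the scoring idea in Python. The weights, half-life, and penalty are illustrative guesses, not my production values:

```python
import math

def recall_score(similarity: float, age_days: float, importance: float,
                 confidence: float, contradicted: bool,
                 w_sim: float = 0.5, w_rec: float = 0.2,
                 w_imp: float = 0.15, w_conf: float = 0.15,
                 half_life_days: float = 30.0,
                 contradiction_penalty: float = 0.5) -> float:
    """Blend semantic similarity with recency, importance, and confidence.
    All inputs except age_days are in [0, 1]; weights sum to 1."""
    # Recency decays exponentially with a configurable half-life.
    recency = math.exp(-math.log(2) * age_days / half_life_days)
    score = (w_sim * similarity + w_rec * recency +
             w_imp * importance + w_conf * confidence)
    if contradicted:
        score *= contradiction_penalty  # penalized, not hidden outright
    return score

# A highly relevant memory from six weeks ago outranks yesterday's noise.
older_relevant = recall_score(0.92, age_days=45, importance=0.8,
                              confidence=0.9, contradicted=False)
recent_noise = recall_score(0.30, age_days=1, importance=0.4,
                            confidence=0.9, contradicted=False)
print(older_relevant > recent_noise)  # True
```

That last comparison is the whole trick: a plain vector search would still rank the semantic match first, but without the decay and confidence terms, a pile of recent low-relevance memories crowds out the one you actually need.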
One pattern I leaned on heavily: the truth lifecycle design from Iris, TJ Miller’s memory system built on Laravel Prism. And Mastra’s observational memory research shaped the three-layer architecture. Both are worth looking into if you’re building something similar.
There’s also an Obsidian vault integration layer. My agent reads from and writes to my Obsidian vault directly. It can search 2,000+ notes for context. High-confidence truths get promoted to vault notes automatically. Clippings I save from anywhere get indexed and become retrievable.
My second brain and my AI’s memory are merging. The vault isn’t a reference the agent checks. It’s a shared knowledge base that both of us contribute to and draw from. Everything I’ve built in Obsidian over the past few years is now context my agent can use.
The glue layer
Memory alone isn’t enough.
An agent that remembers everything but can’t connect to your actual tools is a brain in a jar. The memory has to be fed.
I’ve built several connector packages that plug into the system: GitHub, Google Calendar, Gmail, Slack, Discord, Telegram, X/Twitter, Linear, weather, health tracking, and more. Each one is a Laravel package. And I’ve started the process of open-sourcing some of them; see plume.
Each connector feeds context into memory. When I save a link from Telegram, the agent remembers what it was about. When I merge a PR, the agent knows what shipped. When an email comes in, the agent can surface it later if it becomes relevant.
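The shared shape across connectors is simple: every tool event becomes something the memory can index. A minimal sketch of that pattern in Python (my real connectors are Laravel packages; these names are hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Protocol

@dataclass
class MemoryEvent:
    source: str        # which connector produced this
    summary: str       # what the agent should remember
    occurred_at: datetime

class Connector(Protocol):
    """Each connector turns tool activity into memory events."""
    def pull(self) -> list[MemoryEvent]: ...

class SavedLinkConnector:
    """Stand-in for a Telegram-style connector that captures saved links."""
    def __init__(self, links: list[str]):
        self.links = links

    def pull(self) -> list[MemoryEvent]:
        now = datetime.now(timezone.utc)
        return [MemoryEvent("telegram", f"Saved link: {url}", now)
                for url in self.links]

def ingest(connectors: list[Connector]) -> list[MemoryEvent]:
    # Fan in events from every connector; downstream, each event
    # would be embedded and stored for later recall.
    return [event for c in connectors for event in c.pull()]

events = ingest([SavedLinkConnector(["https://example.com/article"])])
print(len(events), events[0].source)  # 1 telegram
```

Because every connector emits the same event shape, adding a new tool is just adding a new `pull()`; the memory side never changes.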
That’s the glue layer. Memory plus connectors compound in a way that neither does alone.
The clipping that surfaced during my architecture session? My agent had seen it weeks earlier through one of those connectors. Indexed it. Recognized the relevance when the right moment came.
What changes
The practical impact isn’t dramatic. It’s quiet.
My agent knows my projects. It knows my ways of working. It knows what we talked about yesterday, and last week, and three weeks ago when I was thinking through a different problem.
My daily briefings now pull from calendar, email, task systems, git history, and error logs; synthesized by an agent that knows my priorities. What matters to me and why.
The gap between “AI assistant” and “AI that knows you” is memory. And closing that gap changes how you interact with the tool.
Where this goes
I see this as the future for small businesses. Not replacing people. Complementing them.
Imagine a system that knows your company’s decisions, your clients, your processes. A brain that runs alongside the founder and team. It remembers every conversation, every project, every preference. It surfaces the right context at the right time. It gets smarter the longer you use it.
This is what I’m building. Not for myself alone. The platform is designed to be deployed for other businesses. Each one gets their own instance, their own memory, their own connectors wired to the tools they already use.
A small team with a system like this operates like a much larger one. The institutional knowledge doesn’t walk out the door. The onboarding for new team members includes an agent that already knows how things work.
If you want a system like this built for your business, reach out. That’s the kind of work I’m interested in taking on for clients now.
Start small
You don’t need all of this to get started. Even a structured notes file that your AI reads before each session gives you continuity. Paste in your projects, your preferences, your recent decisions. It’s manual, but it works.
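Even the manual version can be a ten-line script. A sketch in Python, assuming a hand-maintained `context.md` (the filename and format are arbitrary):

```python
from pathlib import Path

CONTEXT_FILE = Path("context.md")  # your hand-maintained notes file

def build_prompt(question: str) -> str:
    """Prepend your standing context to every session, if the file exists."""
    context = CONTEXT_FILE.read_text() if CONTEXT_FILE.exists() else ""
    return f"Context about me and my work:\n{context}\n---\n{question}"

# One file you update by hand; the AI reads it before every session.
CONTEXT_FILE.write_text(
    "## Projects\n- Agent platform (beta)\n"
    "## Preferences\n- Laravel + PostgreSQL\n"
)
prompt = build_prompt("Help me plan next week's roadmap.")
print("Laravel" in prompt)  # True
```

No vectors, no database, no lifecycle. But every session starts with your projects and preferences already in the window, which is most of the continuity win.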
If you want to go further, the building blocks exist. Vector databases, structured prompts, knowledge base integrations. The ecosystem is getting there regardless of your stack.
I’ve started open-sourcing pieces of what I’ve built, and I’ll continue to. You can follow along on GitHub.
Cool stuff
pi.dev by Mario Zechner
A customizable agentic harness I’m starting to use for certain workflows. Multi-provider LLM API, agent runtime, interactive CLI, and Slack bot delegation. What drew me in: it’s built to be shaped around how you work, not the other way around. Also the engine behind OpenClaw.
Ars Contexta
github.com/agenticnotetaking/arscontexta
A Claude Code plugin that generates a personalized knowledge management system through conversation. Instead of templates, it derives a vault architecture from how you think and work. Three spaces: identity, knowledge graph, and operational state. If you’re interested in the vault-meets-AI-memory direction I wrote about above, this is a different take on the same idea.
What I’m shipping
Plume is live and open source. My X/Twitter API v2 client for Laravel. Full API coverage, test fakes, rate-limit aware. On Packagist as jkudish/plume.
Librarium is also open source. Multi-provider deep research CLI. Fan out queries to 10 search and AI APIs in parallel. Available on npm, Homebrew, and as a standalone binary.
Tether beta has opened. I've onboarded the first few users and am sending new invites every few days. I was also on the Nomad Summit podcast talking about Tether. The episode was just published.
Until next time
What’s the first thing you’d want your AI to remember about you?
Keep shipping,
Joey
P.S. If this was useful, forward it to a founder who’s tired of re-explaining context to ChatGPT. They can subscribe at jkudish.com/newsletter.