Issue #006: From Autocomplete to Implementor
The mental shift that changed everything
January 6, 2026
Hey, welcome back. Happy New Year 🎇!
Hope you had a great start to 2026.
Here’s something that might sound strange: I haven’t opened an IDE in months.
Not because I stopped coding. Because my role changed. I went from writing code to orchestrating a team of AI agents. AI stopped being a fancy autocomplete and became something I actually delegate to.
The shift happened gradually, then all at once. Here’s what that looks like day to day.
When the Shift Happened
I started noticing it around last summer. Sonnet 3.7, then Opus 4.1. Each model release got incrementally better at actually understanding code instead of guessing.
Then Opus 4.5 landed. And suddenly it wasn’t just completing my thoughts. It was reading my codebase. Finding answers in the actual code instead of hallucinating. Pushing back when I was overthinking something.
That last part matters more than you’d think. AI now tells me when I’m overcomplicating things. It has opinions. Good ones.
I’m still a software engineer. But I spend my time differently now.
What Daily Life Actually Looks Like
Let me show you something concrete.
I have a command called /next. It’s an agent that answers one question: what should I work on today?
I actually have different versions of this for different projects. Plus one in my vault that oversees everything across all projects. Here’s what it does:
Analyzes my git history to see what I’ve been shipping
Reads my daily notes to understand current context
Checks my roadmap and task systems
Looks at Linear and GitHub issues for client work
Synthesizes all of that and recommends what to tackle next
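For the curious, here’s a minimal sketch of what a command like this can look like. In Claude Code, a custom slash command is just a markdown file under .claude/commands/; the frontmatter fields and vault paths below are illustrative, not the exact version I run.

```markdown
---
description: Recommend what to work on today
allowed-tools: Bash(git log:*), Read, Grep
---

Figure out what I should work on today.

1. Run `git log --since="1 week ago" --oneline` to see what shipped recently.
2. Read the last few daily notes in `vault/daily/`.
3. Check the roadmap and the open items in each project's `next.md`.
4. Pull my assigned Linear and GitHub issues.
5. Synthesize all of it and recommend 1-3 things to tackle next, with a
   one-line rationale for each.

If something looks already done or blocked, say so instead of recommending it.
```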
I’m working on connecting it to even more tools. The more context it has, the smarter the recommendations.
That’s the kind of cognitive work that used to take me 30-40 minutes every morning: the mental overhead of triaging and prioritizing.
Now it takes 30 seconds.
Is it always right? No. Maybe 80-90% accuracy. But when it’s wrong, it’s usually my fault. I didn’t document something. I didn’t update a project note. The AI is only as good as the context I give it.
Which brings me back to Issue #003. If you missed it: I keep everything in an Obsidian vault - notes, specs, project context, daily logs. It’s my second brain, and it’s become the AI’s interface to my work. When I feed it clean context, it makes smart recommendations. When I don’t, it just doesn’t have enough to work with. The output ends up incomplete. Simple as that.
The Learning Loop
Here’s what happens when /next surfaces something that’s already done or isn’t quite the right priority:
I correct it. We both learn from it.
Two scenarios:
Really off - totally misunderstood the request or requirements
Something minor - more of a preference thing
Either way, I document the correction. Sometimes right in my daily note. Sometimes I update the project’s next.md file. Sometimes I refine the agent itself.
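To make that concrete, a correction entry can be as small as this (a hypothetical next.md snippet; the exact format matters less than writing it down somewhere the agent will read):

```markdown
## Corrections

- 2026-01-06: /next recommended "migrate billing webhooks", but that shipped
  last week. Root cause: I never ran /done after merging. Updated the log and
  told the agent to cross-check recommendations against recently merged PRs.
- Preference: surface client work before side projects on weekdays.
```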
Next time, it’s a little bit better. Both human and AI improve.
That’s the loop. Make a mistake. Document why. System gets smarter. Repeat.
This is meta-prompting in action. I touched on it in Issue #005, but the pattern keeps proving itself: the tooling evolves based on actual usage, not guesses about what might be useful.
Course-Correcting, Not Babysitting
There’s a lot of chatter online, particularly on Reddit and 𝕏, from folks who aren’t sold on AI coding yet. The common skepticism: doesn’t it still need constant supervision?
Not really. Not anymore.
I course-correct. That’s different from babysitting. Babysitting is checking in every five minutes. Course-correcting is steering when it goes off track.
Most days, I give AI a spec or point it at a task. It goes and builds. I review what it produces. If it’s good, we ship. If it needs adjustment, I explain why and it refactors.
The role shifted from writing every line to:
Setting direction - this is where I spend most of my time now
Reviewing outputs
Explaining when something needs to change
Approving what’s ready
Sound familiar? That’s what you do with a junior developer. Except this one writes code faster than any human I’ve worked with. It costs me $200/month. Doesn’t complain. Doesn’t need to sleep or eat. The more I use it and give it context, the better it becomes.
And somehow we’re still early. It’s only going to get better.
What’s Still Missing
It’s not perfect. There are still paper cuts.
Gathering requirements from multiple stakeholders or sources is still manual. AI can’t interview your users for you. It can’t pull together scattered Slack threads and meeting notes into coherent requirements. Yet.
Honestly, it probably could if I gave it access to all those tools. But it burns through a lot of tokens to do so. I haven’t found the right approach yet. It’s something I’m thinking about a lot.
Keeping context in one place is on me. If I don’t update my vault, AI doesn’t know what changed. The second brain needs feeding.
I’ve automated a lot of this. Integrations with Linear and GitHub. Commands like /merged and /done that I run whenever I finish a task. They update my vault automatically. But it still requires discipline.
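As a rough sketch, /done is just another slash command. Something in this spirit (paths and frontmatter are again illustrative):

```markdown
---
description: Log a finished task and update project context
allowed-tools: Bash(gh:*), Read, Edit
---

I just finished: $ARGUMENTS

1. Append a completion entry to today's daily note in `vault/daily/`.
2. Check off or remove the matching item in the project's `next.md`.
3. If there's a merged PR, link it in the entry (`gh pr view --json url`).
4. Note anything that should change future /next recommendations.
```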
Code review is the frontier. This is the big one.
Reviewing AI code is fundamentally different from reviewing human code. With a human, you’re checking logic and style. With AI, you need to understand:
How different parts fit together
Why certain decisions were made
Which prompt caused that decision
How code interacts across files
You’re not just reading diffs. You’re tracing back through context and understanding the reasoning.
I’ve tested all the major code review tools. Haven’t found one I love. I’m thinking a lot about what the ideal solution might look like. Maybe something I’ll build. We’ll see.
The Analogy
You know that moment when a new tool clicks and you can’t imagine going back?
Think about how computing evolved. From punch cards to command lines. From command lines to GUIs. Each leap made the previous approach feel impossibly slow.
I’m talking into a microphone right now and somehow that’s translating into this newsletter. Into code. Into working features.
Honestly magic.
The Takeaway
AI didn’t make me obsolete. It let me focus on what I’m best at.
I was already an architect. But I was also doing all the implementation, which slowed me down. Now I can move faster. I can stay in one mode of thinking instead of constantly context switching between designing and typing code.
The discipline isn’t in grinding through implementation anymore. It’s in:
Thinking through requirements clearly
Adding taste and making architecture decisions
Reviewing intelligently
Course-correcting quickly
Building feedback loops
Even spec writing is collaborative now. Claude helps me draft specs. But the thinking, the judgment calls, the taste? That’s still mine.
Those are the skills that matter now.
What I’m Building
A few things in the works. Tether is getting closer to launch. And I’m exploring some developer tooling ideas that tie into everything I’ve been talking about in this newsletter. Code review, planning workflows, feedback loops. More on this soon.
Follow me on 𝕏 @jkudish for updates and behind-the-scenes looks as I build this stuff.
Cool Stuff I’m Testing
Two projects that have been making the rounds on 𝕏 and GitHub. I’m currently evaluating both and will share more if they become part of my workflow:
Beads - A distributed, git-backed graph issue tracker designed for AI coding agents. Replaces markdown task lists with a dependency-aware graph, giving agents persistent memory for long-horizon tasks. Intrigued by whether this could replace Linear in my workflow.
Clawdbot - A friendly little lobster 🦞 assistant that runs locally and connects to WhatsApp, Telegram, and Discord. Voice capabilities, visual canvas, and a surprisingly fun personality. Worth checking out.
Until Next Time
Has this shift happened for you yet? If not, what’s holding you back?
I’d love to hear from you. Hit reply and tell me where you are in this journey.
Keep shipping,
Joey
P.S. I’m curious: what’s the biggest paper cut in your AI workflow right now? The thing that still feels clunky or manual? Hit reply and tell me.
P.P.S. If this resonated, forward it to someone still writing every line themselves. They can subscribe at jkudish.com/newsletter.