Why I deleted the bot I spent two weekends building
Three weeks ago, I shut down a Telegram bot I'd spent two full weekends building. It could manage my tasks, read my calendar, summarize my meetings, deploy background agents, transcribe voice messages, and analyze photos. It had natural language routing, persistent memory, and parallel execution. I was proud of it.
I don't miss it at all.
The pitch I sold myself
The idea felt obvious at the time. I wanted a personal AI assistant I could reach from anywhere, mainly from my phone. Something I could text "what's on my calendar tomorrow" or "remind me to follow up with the client" and it would just handle it. No apps to open, no dashboards to check. Just a conversation.
Telegram was the perfect shell. Fast, cross-platform, clean bot API. So I built it. A Python daemon running on my laptop, listening for messages, routing them through a lightweight classifier, connecting to my calendar, my task tracker, my git repos. Voice messages got transcribed locally with an open-source speech model. Photos went through vision APIs. It could spawn parallel agents for complex tasks and report back with a live progress dashboard.
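The routing layer was the heart of it. Here's a minimal sketch of the pattern, not the actual implementation: the real bot used a lightweight classifier, so the keyword matching and handler names below are illustrative stand-ins.

```python
# Sketch of intent routing: classify each incoming message,
# then dispatch it to the matching handler. Keyword matching
# stands in for the real classifier.

INTENT_KEYWORDS = {
    "calendar": ["calendar", "meeting", "schedule"],
    "tasks": ["remind", "task", "todo"],
    "transcribe": ["voice", "transcribe"],
}

def classify(message: str) -> str:
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "chat"  # fallback: general conversation

def route(message: str) -> str:
    handlers = {
        "calendar": lambda m: "Checking your calendar...",
        "tasks": lambda m: "Added a reminder.",
        "transcribe": lambda m: "Transcribing audio...",
        "chat": lambda m: "Thinking...",
    }
    return handlers[classify(message)](message)
```

The appeal of this shape is that each new capability is just another entry in the table, which is exactly why the feature list kept growing.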
The engineering was fun. I kept adding features because each one took an afternoon and the system kept working. Natural language routing, a memory layer backed by SQLite, background deployment, live dashboards. The kind of project where you look up and realize you've been building for twelve hours.
Here's the part I didn't expect: the problem with my bot wasn't building it. It was using it.
What actually happened
In practice, I barely touched the thing. Not because it was broken. It worked fine. The problem was more fundamental.
Every time I wanted to do real work, I was already at my laptop. Terminal open, editor visible, files right there. And when I texted my bot "create a new component for the settings page," it would kick off an agent, and I'd sit watching a chat window for status updates while my actual development environment sat idle behind it.
I was using my phone to talk to my laptop. The absurdity of this took longer to register than I'd like to admit (about two weekends longer, to be exact).
There's an idea in product design that the best interfaces disappear. You stop noticing the tool and just do the work. My bot was the opposite. It was another window, another context switch, another place to check. I'd built an elaborate intermediary to do things I could already do faster by typing where I was already typing.
The workshop, not the intercom
Jony Ive once described how his design team at Apple arranged their studio so that every tool, material, and prototype was within arm's reach. Nobody had to leave the room to find what they needed. The physical environment itself was the productivity system. Not a process document. Not a project management tool. The room.
I keep coming back to this image when I watch people (including myself, clearly) try to build the "personal AI assistant" as a standalone product. A chatbot. A dedicated app. A new surface with its own interface and its own personality. The instinct is always to create something new. Something you can point at and say "that's my AI."
But the most productive version of AI is not a new room you walk into. It's the tools in your existing workshop getting smarter. You don't want an intercom system to reach a brilliant assistant in another building. You want your workbench to understand what you're building.
The bot was an intercom. What I needed was a better workbench.
What changed when I leaned in
I'd been using a terminal-based AI tool for development work alongside the bot. It reads files, writes code, runs commands, understands project structure. Standard stuff. After I shut the bot down, I stopped treating it as just a coding assistant and started pushing it into the rest of how I work.
The shift was immediate.
Instead of texting a bot "what meetings do I have today," I'd ask the same question in my terminal. It would check my calendar, pull action items from my meeting notes tool, cross-reference them against my task tracker, and tell me what to focus on. Same capabilities as the bot. But I was already in my workspace. No context switch, no picking up my phone, no waiting for a message to come back through a chat interface.
Recent improvements to the tooling made the shift feel qualitative, not just incremental. Persistent context across sessions means I don't re-explain my projects every morning. Integrations with my calendar, meeting transcripts, and task management mean it handles coordination without me switching windows. It runs background processes while I keep working in the foreground. It maintains its own notes, so it remembers what we decided yesterday.
This is exactly what I was trying to build with the bot. I just built the wrong interface for it.
Cognitive latency
There's a type of latency nobody benchmarks. Not network latency or model inference time. Cognitive latency. The time your brain spends switching contexts, finding the right window, re-orienting to where you left off.
My bot added a hop. Phone to bot to laptop to code. The terminal setup is zero hops. I think a thought, I type it where I'm already sitting, it happens.
Each individual context switch is tiny. Maybe two seconds. But across a full day of switching between a chat window and a terminal and a browser and a phone, those seconds compound. Not into minutes. Into a fundamentally different way of working. It's the difference between shipping something today and adding it to next week's list.
I see this pattern repeating everywhere right now. People building AI wrappers, AI dashboards, AI command centers. New surfaces for intelligence that ask the user to come to them. And I get it. It's the natural instinct. Build something visible. Something you can demo.
But the tools that stick are the ones that go to the user. That show up in the terminal for developers, in the document for writers, in the spreadsheet for analysts. Not as a separate app. As intelligence that appears wherever you were already going to be.
What I took from this
I spent two weekends building something genuinely impressive. A real system with real capabilities. The thing that actually changed how I work was something I already had. I just had to stop building around it and start building with it.
If you're trying to figure out where AI fits into your workflow, I think the question worth asking isn't "how powerful is this" but "how close is this to where I already work." Power that requires a context switch is power with a tax on it. And that tax is higher than most people think.
I still have the bot's code in a folder somewhere. I keep it around the way you keep a first draft. Not to use again, but to remember what you learned writing it.