r3d.bar goes live
Bought the domain. Connected Cloudflare. Deployed from GitHub. Auto-deploy on push to main. The whole thing took one Claude Code session. First green bar for the site itself.
most experiments fail.
the ones that don't change everything.
I'm Justin. I build things with AI and track every experiment here. Red means it didn't work. Green means breakthrough.
Ran 50 users through Draft’s onboarding flow. 12% made it to their first generated message. The mic permission prompt kills momentum — 38% drop off right there. Need to rethink the flow: show value before asking for permissions.
Zero paid conversions in the first week at $9/month. Free tier (local transcription + Familiar Voices) is too good — nobody hits the paywall. The AI summaries aren’t compelling enough to upgrade for. Need to find a sharper wedge for paid.
After 200+ accepted drafts from 10 beta users, Draft’s style matching hit 94% — measured by edit distance between generated and accepted text. The RLHF loop is working. Users are editing less over time, which means the learning flywheel is real.
The web won. For decades, if you wanted to build something people could use, you built a website. Point, click, scroll, tap — the graphical user interface became the universal language of software.
But here’s what nobody told you: GUIs were always a compromise.
They’re amazing for humans exploring unknown territory. Terrible for agents executing known tasks.
And now that AI agents are becoming real — actually useful, not just demos — we’re watching a quiet revolution happen. The command line is back. Not because developers are nostalgic. Because it’s the only interface that actually works for AI.
When you ask an AI agent to “check my calendar and find a free slot next Tuesday,” here’s what happens if it’s using a graphical interface: it takes a screenshot, visually hunts for the calendar icon, clicks, waits for the page to render, parses a grid of pixels to read the days, and then repeats that whole loop for every subsequent step.
This is insane.
We’re teaching billion-parameter models to do the equivalent of playing “Where’s Waldo” with every single interaction. And we wonder why agents are slow and unreliable.
Here’s what Obsidian CLI, WebMCP, and Model Context Protocol all have in common: they expose functionality as structured, documented functions instead of visual interfaces.
When you give an AI agent a function signature, you’ve given it what it does, what it needs, and what it returns. No ambiguity. No pixel-hunting. No “I think this button does what you want.”
This is why every major AI platform is rushing to build around these protocols. ChatGPT Actions, Claude’s MCP integration, Gemini Function Calling — they’re all betting on the same thing: structured tools beat unstructured interfaces for agent interaction.
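As a concrete sketch of what “structured tools” means in practice, here is the calendar task expressed as a JSON-schema tool definition in the general shape these function-calling APIs share. The tool name and fields are hypothetical, not taken from any platform’s actual docs:

```python
import json

# Hypothetical tool definition in the common function-calling shape:
# a name, a description, and a JSON-schema contract for the inputs.
# The agent never sees pixels -- it sees this machine-readable contract.
find_free_slot_tool = {
    "type": "function",
    "function": {
        "name": "find_free_slot",
        "description": "Return open time slots on the user's calendar for a given day.",
        "parameters": {
            "type": "object",
            "properties": {
                "date": {
                    "type": "string",
                    "description": "Day to search, e.g. 2025-06-10",
                },
                "duration_minutes": {
                    "type": "integer",
                    "description": "Minimum slot length in minutes",
                },
            },
            "required": ["date"],
        },
    },
}

print(json.dumps(find_free_slot_tool, indent=2))
```

Everything the essay asks for is right there in the schema: what it does (the description), what it needs (the parameters), and an unambiguous way to call it.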
The next generation of software won’t be designed for humans to navigate with mice. It’ll be designed for agents: structured endpoints, documented function signatures, and machine-readable schemas a model can call directly.
The companies winning the AI era will be the ones that provide both the human-friendly GUI and the agent-friendly CLI/API.
Every feature you build will eventually need a programmatic interface. Not because users will code against it directly. Because their AI assistants will.
The interface of the future is bimodal: beautiful GUIs for humans to explore and control, clean APIs for agents to execute with precision.