Automated TDD with Claude Code: Testing Strategy for AI-Assisted Engineering
Every project I hand to Claude Code starts the same way: I write the testing strategy before the first line of application code exists. Not because I am a TDD purist (I have skipped tests on personal projects like anyone else), but because I learned the hard way that an AI agent without test constraints will produce code that works today and breaks tomorrow. The agent is fast, confident, and has zero memory of what it built yesterday. Tests are the only thing that survives between sessions and keeps the codebase honest.
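The constraint can be as small as a test file committed before the agent writes any application code. A minimal sketch of the pattern, where the `slugify` function and its behavior are hypothetical examples, not taken from any real project: the tests pin the contract, and in true test-first order they would fail until the agent supplies an implementation like the one shown.

```python
import re

def slugify(text: str) -> str:
    """Lowercase, strip punctuation, collapse whitespace runs into single hyphens."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s-]", "", text)  # drop everything but word chars, spaces, hyphens
    return re.sub(r"[\s-]+", "-", text).strip("-")

# These tests are the contract that survives between agent sessions.
def test_slugify_collapses_whitespace_and_lowercases():
    assert slugify("  Hello   World ") == "hello-world"

def test_slugify_strips_punctuation():
    assert slugify("Rock & Roll!") == "rock-roll"
```

The point is not the function; it is that tomorrow's session inherits an executable definition of "working" that no amount of agent confidence can talk its way around.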
The Leverage Factor, Part 2: Defending the Numbers
"The Leverage Factor: Measuring AI-Assisted Engineering Output" generated more direct messages than anything else I have published. Some of the feedback was enthusiastic. A significant portion was hostile. "Exaggerated." "False." "No way those numbers are real." Fair enough. I published extraordinary claims with data but without enough context for readers to evaluate the methodology. This article fills that gap. I am going to take specific time records, break them apart, defend the human estimates with engineering detail, and then show that the original leverage calculation actually understates the real multiplier.
Agentic Coding, FOMO, and Flow State Addiction
Last Monday I went into my office at 7 AM to kick off a few Claude sessions before taking the trash cans to the street. I sat down, wrote three prompts, and started reviewing the first batch of output. At noon I looked up and realized the garbage truck had come and gone four hours ago. I had not eaten breakfast. I had not taken the trash out. I had been sitting in the same chair for five hours without standing up, and the only reason I noticed was that someone texted to ask if I was still alive. The work was going so well that stopping felt physically wrong.
Agentic Coding and Decision Fatigue: The Cognitive Cost of Supervising AI
During a recent stretch of heavy Claude Code usage, I started noticing an uncomfortable trend. At 8 AM I could run three agent sessions at once, spot a bad abstraction in a 200-line diff, and push back on architectural shortcuts without hesitation. By 3 PM the same work felt like wading through concrete. My prompts got sloppy. I started approving diffs I would have questioned six hours earlier. Twice I caught myself closing a session just to avoid making a decision about it. Once I even prompted the following: "I know you can do better than this. Be thorough and just get it done, bro." The work had not gotten harder. My interest had not faded. I wanted to understand what had changed between 8 AM and 3 PM inside my skull.
Giving Claude Code a Voice with ElevenLabs
I spend hours in Claude Code every day. Long sessions where I am reading, thinking, switching contexts, and occasionally glancing at the terminal to see if the agent finished a task. The problem: Claude Code is silent. It finishes a 10-minute build-and-deploy pipeline and just sits there, cursor blinking, waiting for me to notice. The whole concept here was inspired by J.A.R.V.I.S. from the Iron Man films, voiced by Paul Bettany. Tony Stark's AI assistant announces status, flags problems, and delivers dry commentary while Stark works on something else entirely. I wanted that. An AI assistant that speaks. That announces when it starts a task and summarizes what it accomplished when it finishes. Like a competent colleague who taps you on the shoulder and says "that deployment is done, here's what happened."
The Leverage Factor: Measuring AI-Assisted Engineering Output
In finance, leverage is the use of borrowed capital to amplify returns. A trader with 10x leverage controls ten dollars of assets for every dollar of equity. The idea is straightforward: a small input controls a disproportionately large output. The same principle now applies to software engineering, and the ratios are significantly higher than anything a margin account offers.
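As a toy illustration of the arithmetic, assuming leverage is defined as estimated unassisted human time divided by actual elapsed time (the hours below are hypothetical, not data from the article):

```python
def leverage_factor(human_estimate_hours: float, actual_hours: float) -> float:
    """Leverage as estimated unassisted human time over actual elapsed time."""
    return human_estimate_hours / actual_hours

# Finance analogue: $1 of equity controlling $10 of assets is 10x leverage.
# Engineering analogue (hypothetical): a task estimated at 16 human hours,
# delivered in 1.5 hours of supervised agent time.
print(round(leverage_factor(16, 1.5), 1))  # prints 10.7
```

The same ratio structure, but with no margin call when the position moves against you.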
Single Serving Applications - The Clones
I'm systematically replacing my SaaS subscriptions with Single Serving Applications. These are purpose-built, AI-generated apps designed for an audience of one. Each clone is built by Claude Opus 4.6 from a requirements document, runs via Docker Compose, and costs essentially nothing to operate.
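The operational footprint of one of these clones can be a single compose file. A sketch under assumptions: the service names, image tags, and ports below are illustrative, not taken from any actual clone.

```yaml
# compose.yaml -- hypothetical single-serving app: one web container, one database.
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

`docker compose up -d` and the subscription it replaces becomes a line item you no longer pay.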
Ephemeral Apps Are Almost Here
I recently built a Harvest clone in 18 minutes, a Trello clone in 19 minutes, and a Confluence clone in 16 minutes. All three were generated entirely by Claude Opus 4.6 from requirements documents. All run in Docker. All work.
Overlooked Productivity Boosts with Claude Code
Most engineers who adopt Claude Code start with the obvious: "write me a function," "fix this bug," "add a test." Those are fine. They also miss at least half the value. The largest productivity gains come from activities engineers either do poorly, skip entirely, or never consider delegating. After months of tracking leverage factors across every task I give Claude Code, the data reveals where the real multipliers hide. Surprisingly few involve writing application code.