Time Record · 1 min read · Mar 15, 2026

Leverage Record: March 15, 2026

Two tasks on a spring break Saturday. I carved out about an hour and a half of computer time to get the content synthesis pipeline running and generate interactive labs for the free tier. Minimal supervisory effort: eight minutes of prompting produced a week and a half of human-equivalent output.

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, despite being primarily Claude Code. The actual total AI-assisted output for any given day is substantially higher than what appears here.

Task Log

| # | Task | Human Est. | Claude | Weeks | Factor | Sup. Factor |
|---|------|------------|--------|-------|--------|-------------|
| 1 | Generate 66 interactive labs across 35 educational domains | 40h | 35m | 1.0w | 68.6x | 800.0x |
| 2 | Set up content synthesis pipeline and synthesize 35 educational domains (environment fixes + batch processing + quality review + lesson generation) | 20h | 50m | 0.50w | 24.0x | 240.0x |

Aggregate Stats

| Metric | Value |
|--------|-------|
| Total tasks | 2 |
| Human-equivalent hours | 60h (7.5 working days) |
| Claude wall-clock time | 85m (1.4h) |
| Supervisory time | 8m |
| Tokens consumed | ~595,000 |
| Weighted avg leverage factor | 42.4x |
| Weighted avg supervisory factor | 450.0x |
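As a sanity check, the weighted averages can be reproduced from the per-task numbers. A minimal sketch, with illustrative field names; the 3m/5m supervisory split is inferred from the per-task factors (40h / 3m = 800x, 20h / 5m = 240x), not stated directly in the log:

```python
# Reproduce the aggregate leverage figures from the task log.
# Field names are illustrative; supervisory minutes are inferred
# from the per-task supervisory factors (800.0x and 240.0x).
tasks = [
    {"human_hours": 40, "claude_minutes": 35, "supervisory_minutes": 3},
    {"human_hours": 20, "claude_minutes": 50, "supervisory_minutes": 5},
]

total_human_h = sum(t["human_hours"] for t in tasks)        # 60h
total_claude_m = sum(t["claude_minutes"] for t in tasks)    # 85m
total_sup_m = sum(t["supervisory_minutes"] for t in tasks)  # 8m

# Leverage = human-equivalent time / Claude wall-clock time;
# supervisory factor = human-equivalent time / prompting time.
leverage = total_human_h / (total_claude_m / 60)
supervisory = total_human_h / (total_sup_m / 60)

print(f"leverage {leverage:.1f}x, supervisory {supervisory:.1f}x")
# → leverage 42.4x, supervisory 450.0x
```

Computing the weighted averages this way (totals over totals) rather than averaging the two per-task factors is what yields 42.4x instead of the naive mean of 68.6x and 24.0x.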

Analysis

The lab generation task at 68.6x carried most of the leverage. Once the domain specifications and lab framework exist, generating 66 labs across 35 domains is pure content stamping: the AI reads each domain's goal hierarchy and produces labs that exercise the leaf nodes. Three minutes of prompting and 35 minutes of execution yielded what would take a curriculum developer a full work week.

The pipeline setup task at 24.0x was lower because it involved debugging: fixing a virtual environment, resolving dependency conflicts, patching a batch script, and then running the actual synthesis and quality review passes. Environment troubleshooting is where AI leverage drops. The machine is fast at generating code but still has to iterate through build-fix-retry cycles just like a human would, only faster.

The supervisory leverage of 450.0x is the standout number. Eight minutes of my time on a Saturday afternoon turned into seven and a half days of engineering output. Spring break barely noticed.