
Leverage Record: March 3, 2026

AI Time Record

About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.

Daily accounting of what Claude Opus 4.6 built today, measured against how long a senior engineer familiar with each codebase would need for the same work. This was a day dominated by education platform development, engineering metrics tooling, and build infrastructure improvements.

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, even though it is done primarily with Claude Code. The actual total AI-assisted output for any given day is substantially higher than what appears here.

The Numbers

| # | Task | Human Est. | Claude | Leverage |
|---:|------|-----------:|-------:|---------:|
| 1 | Cross-platform desktop app Phase 2: shared UI library, activity components, demo migration (8 screens, 50+ files) | 120 hours | 45 min | 160x |
| 2 | Engineering metrics dashboard: core package, CLI, DB migrations, CSV import (160 records), 4 design docs, 25 Python files | 80 hours | 22 min | 218.2x |
| 3 | Engineering metrics dashboard: API server, React dashboard with interactive tooltips | 80 hours | 35 min | 137x |
| 4 | Mobile app requirements document (18-section comprehensive spec) | 24 hours | 25 min | 57.6x |
| 5 | Education platform: course mode + exam simulator (lesson synthesis engine, runtime personalization API, course UI, exam simulator, documentation; ~50 files across 5 repos) | 480 hours | 120 min | 240x |
| 6 | Mobile app lesson/exam feature integration (6 tasks: types, API, persistence, 3 screens, course structure, app routing) | 16 hours | 10 min | 96x |
| 7 | Runtime lesson personalization engine + documentation + certification lesson content generation | 12 hours | 30 min | 24x |
| 8 | Engineering metrics dashboard: user and team management expansion | 24 hours | 16 min | 90x |
| 9 | Lesson caching, content chunking, and adaptive learning toggle (11 files across 4 repos) | 16 hours | 6 min | 160x |
| 10 | Simplify 85 technical specification diagrams (config fix, mechanical simplification, figure splitting, full validation) | 24 hours | 19 min | 75.8x |
| 11 | visionOS immersive 3D experience (3 new files, 10 modified) | 40 hours | 45 min | 53.3x |
| 12 | ML evaluation pipeline repair, documentation, and standardized test domain run | 40 hours | 53 min | 45.3x |
| 13 | Build-time diagram rendering pipeline for technical specification figures | 16 hours | 25 min | 38.4x |
| 14 | ML evaluation calibration: automated judge ordering (6 files) | 8 hours | 12 min | 40x |

Aggregate

| Metric | Value |
|--------|-------|
| Tasks completed | 14 |
| Human equivalent | 980 hours (~24.5 work weeks) |
| Claude wall-clock | 463 minutes (~7.7 hours) |
| Tokens consumed | ~3,270,000 |
| Weighted leverage factor | 127.0x |
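The weighted leverage factor appears to be total human-estimate time divided by total Claude wall-clock time, which is the same as averaging the per-task leverages weighted by each task's Claude minutes. A minimal sketch reproducing the aggregates, with the per-task values copied from the table above:

```python
# Values copied from "The Numbers" table, one entry per task in order:
# human estimates in hours, Claude wall-clock in minutes.
human_hours = [120, 80, 80, 24, 480, 16, 12, 24, 16, 24, 40, 40, 16, 8]
claude_minutes = [45, 22, 35, 25, 120, 10, 30, 16, 6, 19, 45, 53, 25, 12]

total_human_hours = sum(human_hours)        # 980
total_claude_minutes = sum(claude_minutes)  # 463

# Per-task leverage: human estimate converted to minutes, over Claude minutes.
per_task = [h * 60 / m for h, m in zip(human_hours, claude_minutes)]

# Weighted leverage: total human minutes over total Claude minutes, which
# weights each task's leverage by the Claude time it actually consumed.
weighted = total_human_hours * 60 / total_claude_minutes

print(total_human_hours, total_claude_minutes, round(weighted, 1))
# 980 463 127.0
```

Note that this Claude-time weighting is why the 240x education platform task (120 of the 463 minutes) pulls the average up far more than the 160x lesson-caching task does at 6 minutes.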

Analysis

This was the second consecutive day above a 100x weighted average. The primary driver was education platform development: the course mode and exam simulator implementation at 240x was the single largest task, producing a lesson synthesis engine, runtime personalization API, and full exam simulator with UI across five repositories. A human building this system from scratch, even with clear specs, would need twelve work weeks. Claude delivered it in two hours because each component followed patterns established by earlier components in the session.

The engineering metrics dashboard work clustered around 90x to 218x. The highest leverage came from the core package buildout at 218.2x: a CLI, database migrations, CSV import for 160 historical records, four design documents, and 25 Python files. This is the same pattern that drives high leverage factors consistently: well-structured, repetitive work with clear schemas. Once the data model and API patterns are established, each additional endpoint and migration is incremental.

The lowest leverage came from the runtime lesson personalization work at 24x and the diagram rendering pipeline at 38.4x. The personalization engine required iterative testing against actual lesson content, which added wall-clock time. The rendering pipeline involved integrating a new JavaScript library (beautiful-mermaid) into a Python build system, which required debugging cross-runtime issues.

The visionOS immersive experience at 53.3x was notable for being a spatial computing project. Building 3D environments and interaction models is inherently slower for AI because the feedback loop requires visual inspection that text-based agents cannot perform. The human estimate is also lower than for comparable 2D work because visionOS projects have less boilerplate.

Twenty-four and a half work weeks of output in a single day at 127x leverage. Every minute of Claude time replaced just over two hours of senior engineering work.


See all records under the Time Record tag.

Let's Build Something!

I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.

Currently taking on select consulting engagements through Vantalect.