Leverage Record: February 27, 2026

AI Time Record

About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.

Nineteen tasks today across three distinct workstreams: patent figure generation, cloud certification tooling, and technical writing. The patent work dominated in volume (11 tasks) and posted the day's highest leverage factor (192x on the consolidated generation run), with the infrastructure design document close behind at 120x.

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, even though it is done primarily with Claude Code. The actual total AI-assisted output for any given day is substantially higher than what appears here.

Task Log

| Task | Human Est. | Claude Time | Tokens | Leverage |
| --- | --- | --- | --- | --- |
| LLM knowledge benchmark: repo scaffolding, test harness, 106 questions, 3 model benchmark runs | 40h | 30min | 150k | 80x |
| Hand-craft 6 patent figure SVGs for application E | 6h | 12min | 25k | 30x |
| Hand-craft 7 patent figure SVGs for application D | 6h | 12min | 25k | 30x |
| Hand-craft 7 patent figure SVGs for application C | 8h | 15min | 25k | 32x |
| Hand-craft 9 patent figure SVGs for application G | 6h | 12min | 25k | 30x |
| Hand-craft 6 patent figure SVGs for application A | 8h | 12min | 45k | 40x |
| Hand-craft 7 patent figure SVGs for application I | 6h | 12min | 25k | 30x |
| Hand-craft 7 patent figure SVGs for application J | 8h | 8min | 25k | 60x |
| Hand-craft 9 patent figure SVGs for application F | 8h | 12min | 35k | 40x |
| Hand-craft 7 patent figure SVGs for application H | 6h | 12min | 25k | 30x |
| Hand-craft 7 patent figure SVGs for application K | 8h | 15min | 45k | 32x |
| Generate 78 patent figure SVGs + 11 compiled figure PDFs | 80h | 25min | 850k | 192x |
| Add validation model integration + synthesize cloud certification content | 4h | 15min | 50k | 16x |
| Implement challenge generator with edge-case filtering and hybrid model config | 8h | 25min | 150k | 19.2x |
| Infrastructure design document (16 sections: VPC, ECS, S3, IAM, EventBridge, CloudWatch, CI/CD, SNS, Terraform, costs, security) | 16h | 8min | 116k | 120x |
| Legacy answer support: schema, scorers, question files, reporter, formatter, tests, 4 benchmark reruns | 6h | 20min | 80k | 18x |
| Draft 3,800-word technical article + AI detection scan + site CTA redesign + staging/production deploy | 8h | 25min | 120k | 19.2x |
| Stage-isolated pipeline with per-stage model override | 8h | 12min | | 40x |
| Standalone re-validation tool with batch mode and shared model instances | 6h | 15min | 80k | 24x |

Aggregate Stats

| Metric | Value |
| --- | --- |
| Total tasks | 19 |
| Total human-equivalent hours | 246h |
| Total Claude minutes | 297min (4h 57min) |
| Total tokens | ~1.87M |
| Weighted average leverage | 49.7x |

Analysis

The 192x leverage factor on the consolidated patent figure generation task stands out. That task involved converting 78 Mermaid diagrams into hand-crafted SVGs formatted for patent submission and compiling them into 11 separate PDF documents. A human patent illustrator would spend two full working weeks on that volume. Claude completed it in 25 minutes.
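For context, a patent-formatted figure is mostly monochrome line art: outlined boxes with labels and reference numerals. This is not the actual tooling from the task, but a minimal illustrative sketch of generating one such SVG programmatically (element names, numerals, and page dimensions are invented):

```python
# Illustrative sketch only: build a patent-style SVG figure as a string.
# Each box is monochrome line art with a label and a reference numeral,
# which is the general style patent offices expect for drawings.

def patent_box(x, y, w, h, label, numeral):
    """One outlined rectangle with a centered 'Label (numeral)' caption."""
    return (
        f'<rect x="{x}" y="{y}" width="{w}" height="{h}" '
        f'fill="none" stroke="black"/>'
        f'<text x="{x + w / 2}" y="{y + h / 2}" text-anchor="middle" '
        f'font-family="Arial" font-size="12">{label} ({numeral})</text>'
    )

def patent_figure(boxes, width=612, height=792):
    """Wrap the boxes in an SVG element sized like a US Letter page."""
    body = "".join(patent_box(*b) for b in boxes)
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{width}" height="{height}">{body}</svg>'
    )

# Invented example elements; real figures carry many more components.
fig = patent_figure([
    (50, 50, 200, 60, "Processor", 102),
    (50, 150, 200, 60, "Memory", 104),
])
```

The appeal of generating figures this way is determinism: every box, stroke, and numeral is reproducible from data, so revising an application's figures is a re-run rather than a redraw.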

The infrastructure design document hit 120x. Sixteen sections covering every layer of a cloud-native export pipeline, from VPC topology to Terraform module structure to cost projections. Writing that document from scratch with proper architecture diagrams takes a senior engineer two full days. Claude produced it in 8 minutes.

The knowledge benchmark build (80x) involved creating a complete evaluation framework: Pydantic schemas, three scoring mechanisms (exact match, multiple choice extraction, LLM-as-judge), provider abstraction for multiple APIs, 106 AWS questions across 7 categories, and three full benchmark runs with result reporting. That is a week-long project compressed into 30 minutes.
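To make the scoring layer concrete, here is a hedged sketch of what exact-match and multiple-choice-extraction scorers can look like. The real framework uses Pydantic schemas and also has an LLM-as-judge path; a plain dataclass stands in here, and the field names and extraction regex are assumptions, not the project's actual code:

```python
# Sketch of a benchmark scoring layer: exact match plus multiple-choice
# letter extraction. Dataclass fields and the regex are invented examples.
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class Question:
    prompt: str
    answer: str   # gold answer, e.g. "B" for multiple choice
    kind: str     # "exact" | "multiple_choice"

def score_exact(gold: str, response: str) -> bool:
    """Case- and whitespace-insensitive literal comparison."""
    return response.strip().lower() == gold.strip().lower()

def extract_choice(response: str) -> Optional[str]:
    """Pull a standalone option letter like 'B', '(B)', or 'B.' from a reply."""
    m = re.search(r"\b([A-D])\b", response.strip().upper())
    return m.group(1) if m else None

def score(q: Question, response: str) -> bool:
    if q.kind == "multiple_choice":
        return extract_choice(response) == q.answer.upper()
    return score_exact(q.answer, response)
```

Extraction matters because models rarely answer with a bare letter; scoring the extracted choice rather than the raw string keeps the benchmark from penalizing verbose but correct answers.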

The lower-leverage tasks (16x to 19x) involved more iterative work: challenge generation with tuning, article drafting with AI detection compliance, and benchmark refinement with reruns. These tasks require more back-and-forth judgment calls, which compresses less dramatically.

The stage-isolated pipeline task (40x) added per-stage model overrides to the certification synthesis system, allowing different AI models to be configured for each pipeline stage. Clean greenfield infrastructure work with a well-defined scope.
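Mechanically, a per-stage override can be as simple as a stage-to-model lookup with a pipeline-wide fallback. A minimal sketch under that assumption, with invented stage and model names:

```python
# Sketch of per-stage model overrides with a pipeline-wide default.
# Stage names and model identifiers are invented examples.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class PipelineConfig:
    default_model: str
    stage_models: Dict[str, str] = field(default_factory=dict)

    def model_for(self, stage: str) -> str:
        # A per-stage override wins; otherwise fall back to the default.
        return self.stage_models.get(stage, self.default_model)

cfg = PipelineConfig(
    default_model="model-large",
    stage_models={"validation": "model-small"},
)
```

The fallback is the important design choice: stages without an explicit entry keep working unchanged, so overrides can be introduced one stage at a time.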

The standalone re-validation tool (24x) extracted validation logic into an independent script with batch mode and shared model instances, allowing targeted re-runs without full pipeline execution.
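"Shared model instances" in a batch tool usually comes down to memoized construction: build one client per model name and reuse it for every item. An illustrative sketch (the client class is a stub, not the real API):

```python
# Sketch of sharing model client instances across a batch re-validation
# run. ModelClient is a stub standing in for a real API client.
from functools import lru_cache

class ModelClient:
    def __init__(self, name: str):
        # In a real tool this would open an API session (an expensive step).
        self.name = name

@lru_cache(maxsize=None)
def get_client(name: str) -> ModelClient:
    """Return one shared instance per model name for the whole batch."""
    return ModelClient(name)

def revalidate_batch(items, model_name: str):
    client = get_client(model_name)  # constructed once, reused per item
    return [(item, client.name) for item in items]
```

Caching by model name means a batch that mixes models still pays the client setup cost only once per distinct model, not once per item.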

A 49.7x weighted average means roughly six weeks of senior engineering output in under five hours of wall-clock time across nineteen tasks.

Let's Build Something!

I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.

Currently taking on select consulting engagements through Vantalect.