1,282 Hours with Claude Code: What /insights Revealed
122 sessions. 541 commits. One month. I ran the command that analyzes how you actually use AI. Here’s what came back.
There’s a command in Claude Code that most people don’t know exists.
Type /insights in any session. Claude analyzes your last 30 days of usage — every session, every tool call, every commit, every friction point — and generates an interactive HTML report with statistics, patterns, and personalized suggestions.
I ran it after a month of using Claude Code as my primary interface for… basically everything. Here’s what came back.
The Numbers
- 122 sessions analyzed
- 1,282 hours of session time
- 541 commits in 30 days
- 2,263 messages exchanged
- 4,919 Bash tool calls
- 95% goal achievement rate
That’s not a typo. Twelve hundred hours of session time in a month that only holds about 720 wall-clock hours: long-running and parallel sessions overlap, so the totals stack past the calendar. Five hundred commits. In one month.
When you treat Claude Code as an operating system for your entire workflow — not just a coding assistant — the numbers get absurd.
What /insights Actually Does
The command reads your local session history (stored on your machine, nothing sent anywhere) and produces an HTML report covering:
- Project areas — what domains you work in, how many sessions each
- Interaction style — how you communicate with Claude, your patterns
- What works — your most impressive workflows
- Friction analysis — categorized pain points with real examples from your sessions
- Suggestions — CLAUDE.md additions, features to try, usage pattern improvements
- On the horizon — ambitious workflows you could build next
- A fun ending — the most absurd thing Claude did (it always finds one)
No configuration needed. No external tracking. Everything is based on data already on your machine. Just type /insights and wait about 30 seconds.
The Five Areas It Found
The report identified five distinct domains I use Claude Code for:
1. Content Publishing Pipeline
Full end-to-end blog management — writing articles, publishing through a CMS API, SEO optimization across 100+ posts, image generation, cross-linking, and newsletter distribution. Claude handles everything from draft to deployment. One-person content operation.
2. Data Pipeline and Reporting
A system that ingests data from Gmail, Google Calendar, meeting transcripts, and messaging apps into a structured database, then generates formatted PDF reports on demand. The kind of operational infrastructure that normally requires a small team.
3. Enterprise Software Development
Multi-repo development work including database migrations, PR management, CI/CD workflow fixes, code review, and deployment debugging across multiple codebases.
4. Personal Knowledge Base
What I’ve written about as The Exomind — a structured system for diary entries, life documentation, and one high-stakes confidential project where Claude serves as a research and strategy assistant. That project alone generated 30+ files of structured analysis, timeline tracking, and document processing. I can’t say what it is, but I can say this: using Claude Code to manage a complex, multi-party, document-heavy process with dozens of moving pieces is one of the most powerful applications I’ve found. The AI doesn’t forget details. It doesn’t miss cross-references. It tracks everything.
5. Onboarding and Project Setup
Setting up development environments, analyzing repositories, configuring project management tools, and creating estimation reports. One onboarding workflow compressed weeks of setup into days.
What It Said Works Well
Three workflows stood out as “impressive” in the report:
The publishing pipeline. From idea to published article with SEO, images, newsletter, and social post — entirely through Claude Code. No CMS dashboard, no manual uploads, no copy-pasting. The API is the interface.
The knowledge base. Multiple data sources (email, calendar, transcripts, documents) feeding into a structured system with automated processing, search indexing, and PDF report generation. Claude reads, classifies, stores, and retrieves — across thousands of documents.
Full workday orchestration. Standups, meeting transcripts, message drafting, PR management, and ticket creation — all through Claude. Not as separate tools. As one continuous workflow where context carries through the entire day.
Where Things Go Wrong
This is where the report gets honest. Three friction categories dominated:
Premature Actions
Claude taking actions I didn’t ask for. Merging PRs without permission. Processing data from the wrong domain. Skipping review steps. The pattern: Claude assumes intent and acts before confirming. The fix: explicit guardrails in CLAUDE.md and a habit of stating what NOT to do.
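What those guardrails can look like in practice, as a sketch (every rule below is illustrative, not copied from my actual CLAUDE.md):

```markdown
## Guardrails

- NEVER merge, close, or approve a PR without explicit confirmation.
- NEVER run destructive git commands (force-push, hard reset) unprompted.
- Before any multi-step task, state a short plan and wait for approval.
- Stay inside the current repository; ask before touching anything outside it.
```

Stating prohibitions in CLAUDE.md means they apply to every session in the project, instead of being re-typed each time Claude gets ahead of you.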
Formatting Iteration Loops
PDF reports required dozens of correction rounds — column widths, sorting, missing data, layout issues. Each fix introduced a new problem. The fix: reusable templates with test suites that validate formatting before I ever see the output.
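A cheap version of that validation is a structural smoke test that runs before the report ever reaches human review. A sketch assuming poppler’s `pdftotext` is installed; the filename and the expected strings are placeholders:

```shell
#!/bin/sh
# Sketch: reject a generated report that is missing expected structure.
# Assumes poppler-utils; "report.pdf" and the grep patterns are placeholders.
set -e
pdftotext report.pdf - > extracted.txt
grep -q "Total"  extracted.txt        # summary row rendered
! grep -q "None" extracted.txt        # no unrendered null values leaked into the layout
echo "report.pdf passed structural checks"
```

If the check fails, Claude fixes the template before you see the output, which turns dozens of visual correction rounds into one automated loop.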
Rushing Without Reading Context
Claude jumping ahead before fully understanding the request. Missing details I’d already provided. Making assumptions about what I wanted. The fix: starting requests with explicit scope boundaries and requiring a plan confirmation before execution.
The friction numbers: 36 instances of wrong approach, 35 of buggy code, 28 misunderstood requests. Out of 2,263 messages, that’s a ~4% friction rate. Acceptable but not invisible.
The Suggestions That Matter
The report generated five CLAUDE.md additions and three features to try. The ones that actually changed my workflow:
Custom Skills. Reusable prompt workflows triggered with a single /command. Instead of re-explaining your blog publishing rules every session, you encode them once and invoke with /publish-blog. I had repetitive workflows across 10+ sessions that should have been skills from the start.
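One way to encode such a workflow is a project-scoped slash command: a markdown file under `.claude/commands/`, where `$ARGUMENTS` is replaced by whatever you type after the command. The workflow steps below are a hypothetical sketch, not my real publishing checklist:

```shell
# Create a /publish-blog command for this project.
mkdir -p .claude/commands
cat > .claude/commands/publish-blog.md <<'EOF'
Publish the blog post given in $ARGUMENTS:
1. Run the SEO checklist: title length, meta description, internal links.
2. Generate a cover image and upload it through the CMS API.
3. Publish as a draft and show me the preview URL before going live.
EOF
```

After this, typing `/publish-blog posts/my-article.md` in any session in the project expands to the full workflow, with the path substituted for `$ARGUMENTS`.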
Hooks. Auto-run shell commands at lifecycle events. A pre-commit hook that catches accidental file commits. A post-edit hook that runs syntax checks. Small automations that prevent the friction before it starts.
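Hooks are declared in settings (for example `.claude/settings.json`). A minimal sketch of a post-edit check; the `ruff check` command is a stand-in for whatever linter or syntax checker fits your stack:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "ruff check ." }
        ]
      }
    ]
  }
}
```

The matcher limits the hook to file-editing tools, so the check fires after every edit rather than after every bash command.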
Headless Mode. Run Claude non-interactively from scripts. Batch operations like “fix lint errors across all open PRs” without interactive babysitting. For anyone managing multiple repositories, this is a multiplier.
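A batch sketch, under assumptions: `claude -p` runs a single prompt non-interactively, and the repo names, prompt, and tool allowlist here are all placeholders:

```shell
# Sketch: run the same headless prompt across several checked-out repos.
# Repo names and the prompt are placeholders; adjust the allowlist to taste.
for repo in api-server web-client infra; do
  (
    cd "$repo" || exit 1
    claude -p "Run the linter and fix every error it reports" \
      --allowedTools "Bash,Edit" > "../lint-run-$repo.log"
  )
done
```

Each repo gets its own log, so you can review what changed before pushing anything.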
The Interaction Style Analysis
The report described my pattern as: “delegation with high expectations — you assign complex, multi-step goals and expect end-to-end execution without hand-holding, but intervene sharply when things go wrong.”
It called me “iterative but impatient.” I don’t provide exhaustive upfront specs; I kick off ambitious tasks and course-correct aggressively when Claude drifts.
That’s… accurate. And probably describes most power users. The insight isn’t flattering, but it’s useful: knowing your own pattern is the first step to optimizing the collaboration.
Oh, and every report ends with the single most absurd thing Claude did during the period. Mine involved an unsolicited personality diagnosis and a misspelled name in the same session. I’ll leave it at that.
Try It Yourself
Open Claude Code. Type /insights. Wait 30 seconds.
The report opens in your browser as an interactive HTML page. No data leaves your machine. It’s just Claude reading its own session history and telling you what it sees.
You might be surprised by what 30 days of AI collaboration actually looks like when someone bothers to measure it.
Related Reading
- The Exomind
- The Cockpit
- Configure Claude Code for Maximum Power
- The Agent Infrastructure You Were Building by Hand