I've been writing about AI-assisted development for a while now, but I wanted to get more concrete. So I pulled my Claude Code usage report — 282 messages across 56 sessions over 21 days — and dug into what's actually working, what isn't, and what I'd tell another developer who wants to get real value out of it.
This isn't a product review. It's a look at how I use Claude Code day-to-day as a senior frontend developer working with React, TypeScript, and Next.js.
The Numbers
Let me set the stage with some raw stats from my usage report:
- 282 messages across 56 sessions in 21 days
- +8,744 / -1,576 lines of code changed
- 133 files touched
- TypeScript dominated with 505 file interactions
- Most common task: bug fixing and debugging (35 out of 56 sessions)
That last one surprised me. I thought I was using Claude primarily for building features, but the data shows I lean on it most heavily for hunting bugs. And honestly? That's where it shines.
Workflow #1: The Jira-to-PR Pipeline
This is the workflow I'm most proud of. I built a custom skill that automates the entire path from a Jira ticket to a pull request. Here's how it works:
- I tell Claude to fetch a Jira ticket using the MCP integration
- Claude reads the ticket description and acceptance criteria
- It creates a feature branch, implements the code, and commits
- When I'm happy, it opens the PR and transitions the ticket in Jira
The real power move was scaling this up to epic-level execution. In one session, Claude worked through an entire epic — 5 separate tasks for a CRUD feature — implementing each one, committing separately, and transitioning all of them to Done. That's the kind of automation that makes you rethink how you plan sprints.
If you're building custom skills, handle integration edge cases directly in the skill definition. My Jira MCP integration sometimes returns empty results, so I added fallback logic that automatically retries with a JQL search. Claude then deals with these silently instead of stalling your session.
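As a sketch of the shape of that fallback (the fetchers here are hypothetical stand-ins, not the actual Jira MCP API), the skill logic looks roughly like this:

```typescript
// Hypothetical sketch: when a primary lookup returns empty, retry with a
// secondary JQL search instead of surfacing the empty result to the session.
type Fetcher<T> = () => Promise<T[]>;

async function withJqlFallback<T>(
  primary: Fetcher<T>,  // e.g. direct ticket lookup by key
  fallback: Fetcher<T>, // e.g. a JQL search for the same ticket
): Promise<T[]> {
  const results = await primary();
  if (results.length > 0) return results;
  // Primary came back empty, so fall back to the JQL search silently.
  return fallback();
}
```

The point is that the retry lives inside the skill, so a flaky integration never becomes the user's problem.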
Workflow #2: Multi-Angle Bug Hunting
My most effective debugging sessions combine Claude's code analysis with external tools. The pattern looks like this:
- Point Claude at the bug (a Jira ticket, a screenshot, or just a description)
- Let it read and grep through relevant files to understand the data flow
- Have it use Playwright to reproduce the issue in the browser
- Apply a fix and immediately re-verify with Playwright
One session that stands out: I had a bug where objects weren't saving correctly. Claude went through multiple rounds of debugging before finally tracing it to a PascalCase vs camelCase mismatch between the frontend field names and the backend API response. The kind of subtle issue that could take hours to find manually.
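The underlying class of bug is easy to reproduce: if the backend sends PascalCase keys and the frontend reads camelCase ones, the property access silently yields undefined. A minimal normalizer (the field name here is invented for illustration) closes the gap:

```typescript
// Backend returns { DisplayName: ... }, frontend reads payload.displayName.
// The mismatched read silently yields undefined, which is exactly the kind
// of bug that hides for hours.
function toCamelCase(key: string): string {
  return key.charAt(0).toLowerCase() + key.slice(1);
}

function normalizeKeys<T>(payload: Record<string, T>): Record<string, T> {
  return Object.fromEntries(
    Object.entries(payload).map(([key, value]) => [toCamelCase(key), value]),
  );
}
```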
Add a rule to your CLAUDE.md that says: "When fixing bugs, always verify the fix by testing with Playwright or running relevant tests — do not wait for the user to ask you to test." This single instruction dramatically improved my session outcomes.
Workflow #3: Exhaustive Refactoring
When I needed to switch a component's selection logic from name-based to ID-based keying across an entire codebase, Claude was invaluable. With 200 edit calls across my sessions and 15 successful multi-file change sessions, this is a pattern I use regularly.
The key lesson I learned the hard way: Claude sometimes misses references. In one refactor, it updated most usages but left behind two lines that still referenced the old pattern, causing a runtime crash.
When doing any rename or identifier change, explicitly ask Claude to grep the entire codebase for ALL usages before and after the change. Include this in your prompt:
Grep for ALL usages of the changed identifier across the entire
codebase, including types, tests, and constants. List all files.
Then make ALL changes atomically and verify none were missed.
It adds a few seconds but saves you from follow-up PRs.
Where Things Go Wrong
I'd be doing you a disservice if I only talked about the wins. Here's what the data shows about friction:
Claude jumps to editing too early. This was my number one pain point — 19 sessions had "wrong approach" friction where Claude started changing code before fully understanding the problem. In one pipeline debugging session, I had to interrupt it twice to say "check the MCPs first!" before it stopped guessing and actually analyzed the issue.
The fix is setting clear rules in CLAUDE.md:
Before editing any file, fully analyze the issue first.
Read relevant files, trace the data flow, and confirm the
root cause before making changes.
Flaky integrations kill sessions. Multiple sessions were completely blocked by API errors, auth failures, and MCP tool responses returning empty data. At least 4 sessions achieved nothing because of infrastructure issues. This isn't a Claude problem per se, but it's a real cost of working with integrated tools.
Sessions end before completion. Nearly half my sessions ended with their goal only partially achieved, or not achieved at all. The scope often grows beyond what a single session can handle, so I've started breaking tasks into smaller, explicit milestones before starting, which helps me land more of them.
What I'd Set Up on Day One
If I were starting fresh with Claude Code today, here's what I'd configure immediately:
A post-edit type-check hook. I work heavily in TypeScript, and too many sessions involved iterating on build errors that could have been caught earlier. A hook that runs npx tsc --noEmit after every edit means Claude catches type errors before they snowball.
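In Claude Code, that hook looks roughly like the snippet below in .claude/settings.json — check the current hooks documentation for the exact schema, and note that the matcher assumes the Edit and Write tool names:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npx tsc --noEmit"
          }
        ]
      }
    ]
  }
}
```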
A CLAUDE.md with debugging rules. The three rules I mentioned above — analyze before editing, test after fixing, grep exhaustively on refactors — would have saved me significant time across dozens of sessions.
Custom skills for repetitive workflows. The Jira-to-PR skill was a game changer. Any workflow you repeat more than three times is worth turning into a skill. It doesn't have to be perfect — you iterate on it just like you iterate on code.
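For reference, a skill is just a markdown file with a short frontmatter. Here's a trimmed, hypothetical sketch of what a Jira-to-PR skill might look like (the exact frontmatter fields are in the skills documentation; the step wording is mine, not my production skill):

```markdown
---
name: jira-to-pr
description: Implement a Jira ticket end to end, from feature branch to PR.
---

1. Fetch the ticket with the Jira MCP integration. If the lookup returns
   empty, retry with a JQL search before giving up.
2. Read the description and acceptance criteria, then restate the plan.
3. Create a feature branch named after the ticket key.
4. Implement the change, run the type check and tests, then commit.
5. Once the user approves, open the PR and transition the ticket.
```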
The Honest Assessment
After 282 messages, my satisfaction breakdown looks like this: 51 sessions where I was likely satisfied and 19 that fully achieved their goal, but also 16 where I was dissatisfied and 7 that achieved nothing at all. That's the real picture — AI-assisted development isn't magic, and it doesn't always work.
But the sessions where it does work? They're transformative. Implementing an entire Jira epic in one sitting. Tracing a subtle API field casing bug through five files in minutes instead of hours. Refactoring a codebase-wide identifier change with confidence.
The key is being intentional about how you use it. Set up guardrails, encode your patterns into reusable skills, and don't be afraid to interrupt when it's heading in the wrong direction. Claude Code is a powerful tool, but like any tool, the results depend on how deliberately you wield it.