AI & Productivity · 6 min read · March 22, 2026

We Had 290 Capabilities in Claude Code — Here's What We Actually Needed

By AI Jungle

We audited our Claude Code setup: 290 capabilities, 24 plugins, 7 code review methods. Here's how we cut it to what matters — and built a tool so you can too.


Key Takeaways

  • A typical Claude Code power user has 200-300 capabilities across built-in tools, plugins, skills, commands, hooks, and MCP servers — most are unused
  • We found 7 different ways to review code, 3 ways to search the web, and 39 phantom MCP tools consuming context window
  • Our MEMORY.md was 247 lines — silently truncated at 200, meaning Claude lost context every session
  • After audit: 24 plugins → 13, 7 review methods → 3, MEMORY.md 247 → 78 lines, composite score C → B+
  • The audit process is now a free plugin: Claude Code Optimizer

We audit Claude Code setups for a living. The first setup we audited was our own.

The finding: 290 total capabilities across 28 built-in tools, 24 plugins, ~80 skills, 27 custom commands, 23 subagent types, 5 MCP servers, and 3 hooks.

The problem: we were actively using about 30 of them. The other 260 were eating context window, creating confusion, and in one case, actively breaking our workflow.

Here's the full breakdown — and the process we now offer as a service.

The Audit: What We Found

1. Plugin Bloat: 24 → 13

We had installed every "interesting" plugin on day one. Classic developer move.

11 plugins disabled:

  • typescript-lsp — our main project isn't TypeScript
  • code-review + code-simplifier — redundant with pr-review-toolkit and gstack's /review
  • security-guidance — passive tips, low value vs context cost
  • agent-sdk-dev, ralph-loop — never used
  • sentry — no active project yet
  • claude-code-setup, claude-md-management — one-time setup tools

Impact: Fewer skills loaded per session means faster startup, cleaner tool list, and less "which review tool do I use?" confusion.

2. The 7 Code Review Problem

We had seven different ways to review code:

  1. /CodeReview (custom command)
  2. /code-review (plugin skill)
  3. /review (gstack skill)
  4. superpowers:code-reviewer (subagent)
  5. feature-dev:code-reviewer (subagent)
  6. pr-review-toolkit:code-reviewer (subagent)
  7. pr-review-toolkit:review-pr (skill)

Seven tools, zero decision tree. Every review started with "which one do I use?"

The fix was simple:

  • Quick sanity check → superpowers reviewer (auto-triggered)
  • About to merge → /review (gstack, diff-based, fast)
  • Complex PR → review-pr (multi-agent, thorough)

The other four? Deprecated or disabled.

3. MEMORY.md: The Silent Truncation Bug

Claude Code's memory system has a 200-line limit on MEMORY.md. Ours had 247 lines.

That means the last 47 lines were invisible to every new session. Those lines contained our Partner Copilot details, digital product pricing, and Company OS notes — context that Claude kept asking us to repeat.

The fix: split MEMORY.md into 8 topic files, keep the index at 78 lines. Every session now loads the full picture.
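The truncation check is easy to script. This sketch simulates a 247-line MEMORY.md with a temp file (point it at your real file instead); the 200-line limit is the one described above:

```shell
# Simulate a 247-line MEMORY.md and flag everything past the 200-line limit.
# The temp file stands in for your real MEMORY.md; run the check on that instead.
memory_file=$(mktemp)
seq -f "note %g" 247 > "$memory_file"   # 247 lines, like our pre-audit file

limit=200
count=$(( $(wc -l < "$memory_file") ))
if [ "$count" -gt "$limit" ]; then
  echo "TRUNCATED: $count lines; last $((count - limit)) are invisible to new sessions"
fi
rm -f "$memory_file"
```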

4. The Hook That Broke Its Own Rule

Our auto-commit hook was designed to commit changes after every edit. Sensible.

But our workflow rule was: "NEVER commit directly to main. Always use feature branches."

The hook didn't check which branch it was on. It also used --no-verify, bypassing any pre-commit safety checks.

Two-line fix: add a branch guard (skip if on main/master) and remove --no-verify.
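The guard can be sketched like this; the function name and commit message are illustrative, not our exact hook:

```shell
# Hypothetical auto-commit hook body with the branch guard described above.
# symbolic-ref resolves the current branch even before the first commit.
auto_commit() {
  branch=$(git symbolic-ref --short -q HEAD) || return 0
  case "$branch" in
    main|master)
      echo "skip: refusing to auto-commit on $branch"
      return 0
      ;;
  esac
  git add -A
  # Note: no --no-verify, so pre-commit checks still run.
  git commit -q -m "auto: checkpoint after edit"
}
```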

5. Phantom MCP Tools

We had 5 MCP servers connected: Canva (31 tools), Excalidraw (5), Gamma (4), Granola (4), GitHub (40+).

Canva, Gamma, and Granola? Never used. That's 39 phantom tools adding noise to every session's tool list without providing any value.
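One crude way to inventory what's connected is to count server entries in a project-scoped .mcp.json. The sample config below is illustrative, and counting "command" keys is a heuristic, not a full JSON parse:

```shell
# Count MCP servers in a project .mcp.json (sample file is illustrative;
# point the grep at your real config). One "command" key per stdio server
# is a crude heuristic, not a full JSON parse.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "mcpServers": {
    "github":  { "command": "github-mcp-server" },
    "canva":   { "command": "canva-mcp" },
    "granola": { "command": "granola-mcp" }
  }
}
EOF
servers=$(grep -c '"command"' "$cfg")
echo "configured MCP servers: $servers"
rm -f "$cfg"
```

For each server the count turns up, ask when you last actually invoked one of its tools.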

The 8-Dimension Scoring Framework

After doing this manually, we built a systematic scoring system:

Dimension         What We Check                    Before   After
Coverage          Right plugins for your stack     8/10     8/10
Redundancy        Overlapping capabilities         4/10     8/10
Config Hygiene    Conflicts, stale config          5/10     9/10
Memory Health     MEMORY.md under limit            3/10     9/10
Workflow          Full session lifecycle           9/10     9/10
Commands          Quality and relevance            7/10     7/10
MCP Utilization   Connected = used?                4/10     8/10
Security          Permissions, hooks, secrets      6/10     9/10

Before: C (62%). After: B+ (84%).
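The plugin's exact weighting isn't published, so treat this as a sketch: a plain unweighted average of the "after" column happens to reproduce the 84% result, though the "before" grade (62%) suggests some weighting is applied.

```shell
# Unweighted composite from the eight "after" scores in the table.
# ASSUMPTION: equal weights; the real scoring may weight dimensions differently.
scores="8 8 9 9 9 7 8 9"
total=0 n=0
for s in $scores; do
  total=$((total + s))
  n=$((n + 1))
done
# Percentage of the maximum (n * 10), rounded to the nearest integer.
pct=$(( (100 * total * 2 + n * 10) / (n * 10 * 2) ))
echo "composite: ${pct}% of $((n * 10)) points"
```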

The Process (Now a Plugin)

We packaged this into a Claude Code plugin: Claude Code Optimizer.

Run /audit and it:

  1. Collects all config files automatically
  2. Inventories every capability (tools, plugins, skills, commands, subagents, MCP, hooks)
  3. Scores 8 dimensions on a 0-10 scale
  4. Maps overlaps with decision trees
  5. Generates prioritized recommendations (P0 through P3)
  6. Saves a report with copy-paste config fixes

It's free to install and run. For teams that want hands-on optimization, we offer Standard ($79) and Premium ($199) tiers.

Quick Wins You Can Apply Today

Even without the plugin, check these three things right now:

1. MEMORY.md line count

Run wc -l on your MEMORY.md. Over 200? You're losing context every session. Split detailed content into topic files and keep the index lean.

2. Unused MCP servers

Count your connected MCP tools. Are you using all of them? Each unused server adds dozens of phantom tools to your context window.

3. Hook conflicts

Read your hooks alongside your workflow rules. Does your auto-commit hook check which branch it's on? Does it bypass pre-commit checks with --no-verify?

Install

The plugin is MIT-licensed and free:

git clone https://github.com/B-AI-bot/claude-code-optimizer

Enable it in your project settings, then run /audit.

Your setup is more powerful than you think. It just needs organizing.

Frequently Asked Questions

Will the audit modify my Claude Code config?

No. The audit is read-only by default. It only applies changes when you explicitly approve each one. You stay in full control.

Does it work with any project type?

Yes. It detects your stack (Node, Python, Rust, Go, Ruby) and scores coverage accordingly. A Python data science project is scored differently than a Next.js web app.

What if I only have a few plugins?

Even a minimal setup benefits. The audit checks workflow gaps, memory health, and security — not just plugin count. Some of the biggest wins come from adding missing tools, not removing extras.

How long does the audit take?

Under 2 minutes for the automated scan. Reading the report and applying fixes takes 10-15 minutes for a typical setup.

Is my config data sent anywhere?

No. Everything runs locally in your Claude Code session. For the paid tiers, you choose what to share with us.
