Cédric Rittié


Claude Code is not just about code

You already use LLMs in a browser tab. Claude Code is what happens when you give AI access to your files, your terminal, and your tools.

Phase 1 · From prompt to workflow · Article 1 of 3

The glass ceiling of LLMs in the browser

ChatGPT, Claude, Gemini in a browser tab: it works. We all use them daily to write, summarize, analyze, brainstorm. The quality of the answers is there.

The problem is everything that happens around the answer:

  • Copying a doc into the conversation because the AI can't access your files
  • Copying the answer into another tool because the AI can't write anything on your end
  • Re-explaining the project context at the start of every new session
  • Getting a 12-step plan that you have to execute yourself

The AI does the intellectual work. You do the logistics. And the logistics eat up a considerable amount of time.

Claude Code removes that layer. Same model, same intelligence, but with access to your environment: files, terminal, web. It doesn't just answer anymore. It acts.

Claude Code vs ChatGPT: what changes

  • In a browser tab: answer only
  • Claude Code: answer + actions
  • Actions = Files (read, write) + Terminal (execute) + Web (browse)

Let's take a common case: analyzing a competitor's positioning before a product meeting.

In a browser tab, it goes like this: open the competitor's site, navigate around, take notes, paste the notes into ChatGPT, ask your questions, copy the summary into a doc. 30-40 minutes of back and forth.

With Claude Code:

Prompt:

Read the product brief in docs/brief-q3.md. Go to lemonsqueezy.com and paddle.com, analyze their positioning, pricing and messaging. Compare with our positioning in the brief. Write the synthesis in docs/competitive-benchmark.md.

Claude:

Reading docs/brief-q3.md...
Navigating to lemonsqueezy.com...
Navigating to paddle.com...
Writing docs/competitive-benchmark.md...
Synthesis written. Lemon Squeezy targets solo creators with simplicity as their key argument. Paddle targets B2B SaaS with tax management. Your brief positions the product between the two but without a clear differentiator. I identified 3 angles in the document.

One prompt. Claude read the local file, visited two sites, compared with the brief, and wrote a document. The intellectual work AND the logistics.

This isn't a better ChatGPT. It's a different mode of use.

The mental model

In a browser tab, the AI is an isolated brain. It receives what you paste in, it responds, that's it. Everything else (reading your files, visiting a site, writing a document, running a command) you do by hand.

Claude Code connects that brain to your environment. Three connections:

  • Your files: Claude reads and writes directly to your disk. A CLAUDE.md at the root describes your project, and it reads it at every session. No more re-explaining context.
  • Your terminal: Claude doesn't tell you what to type. It executes. Start a server, deploy, install a tool. The difference between receiving a recipe and having someone who cooks.
  • The web: Claude visits sites, reads current content, analyzes what it sees. No approximate training memory. The competitor example above is exactly that.
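To make the first connection concrete, here is a minimal CLAUDE.md sketch. The project name, paths and conventions below are hypothetical placeholders; the point is that anything written in this file is read at the start of every session.

```markdown
# Project: acme-site (hypothetical)

## What this is
Marketing site for Acme. Briefs and research live in docs/, pages in src/.

## Conventions
- Tone: direct, no jargon
- Competitive analyses go in docs/competitive-*.md

## Useful commands
- npm run dev: local server on port 3000
```

With this in place, "compare with our positioning" already means something on the first prompt of a session.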

Beyond files: Skills and MCP

Access to files and the terminal is the foundation. What makes Claude Code different is what you build on top.

Skills turn a work method into a command. A landing page audit you do regularly? You codify it once:

Prompt:

/audit-landing https://competitor.com

Claude:

Loading Skill audit-landing...
Navigating to the site...
Headline: 7/10 — clear but not specific to the segment.
CTA: 5/10 — "Learn more" is weak, prefer a direct action.
Social proof: 3/10 — no testimonials above the fold.
Mobile: 8/10 — responsive is fine, load time 2.1s.

Same criteria, same format, every time. No need to rewrite the prompt.
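As a sketch of what codifying that audit could look like: a Claude Code Skill is a SKILL.md file (placed under .claude/skills/audit-landing/ in this hypothetical layout) with a short frontmatter plus the method itself. The criteria below simply mirror the example output above.

```markdown
---
name: audit-landing
description: Score a landing page on headline, CTA, social proof and mobile
---

Given a URL, visit the page and score each criterion from 0 to 10:

1. Headline: clear, and specific to the target segment?
2. CTA: direct action verb, or a weak "Learn more"?
3. Social proof: testimonials or logos above the fold?
4. Mobile: responsive layout, load time under 3 seconds?

Output one line per criterion ("Criterion: score/10") with a short note,
in this order, every time.
```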

MCP servers connect Claude to your existing tools. PostHog, Jira, Slack, Google Calendar, Confluence. Instead of checking a dashboard, you ask a question:

Prompt:

Which pages have the highest bounce rate this week?

Claude:

Querying PostHog...
3 pages with the highest bounce rate:
1. /pricing (72%, avg time 18s) — visitors read but don't convert
2. /about (68%, avg time 12s) — quick exit, content probably not found
3. /blog/getting-started (61%, avg time 25s) — decent read time, no follow-up CTA

Claude queries PostHog, analyzes the data, and gives a diagnosis. Not a chart to interpret. An answer.
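To give an idea of the plumbing: an MCP server is declared once, for example in a .mcp.json at the project root. The server name and package below are hypothetical placeholders; each tool ships its own server, so check its docs for the real command and credentials.

```json
{
  "mcpServers": {
    "posthog": {
      "command": "npx",
      "args": ["-y", "posthog-mcp-server"],
      "env": { "POSTHOG_API_KEY": "phx_your_key_here" }
    }
  }
}
```

Once declared, "which pages have the highest bounce rate?" resolves to real queries instead of a copy-pasted CSV.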

Skills and MCP each have their own article in this series. The point here is to see where it leads.

Three ways to access it

  1. claude.ai/code. In the browser: web access and file creation, no installation needed. The right place to try it out.
  2. CLI (terminal). The full version: access to all files, command execution, Skills, MCP. This is where everything happens for this series.
  3. VS Code. Via the extension: file changes appear in real time in the editor.
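For option 2, the CLI installs via npm. A sketch of the setup, assuming Node is already installed (the project path is a placeholder):

```
npm install -g @anthropic-ai/claude-code
cd ~/projects/my-site
claude
```

From there, every example in this article is a prompt typed into that session.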

What's next in the series

  • GitHub explained: save, version, collaborate
  • The pipeline: ship a project online in 5 minutes
  • Skills: your work methods as reusable commands
  • CLAUDE.md: permanent context, no more re-explaining
  • MCP servers: Claude connected to PostHog, Jira, Slack...
  • Agents: delegate complex tasks to sub-agents

The test

Take an analysis or writing task you have to do this week. Not a 3-line email. A brief, a competitive analysis, a meeting summary. The kind of task where context makes the difference.

Do it in Claude Code. Give 10-15 lines of context instead of 3. That's where the gap shows.
