Applied AI

Claude Code's Source Was Sitting in a Public Package This Whole Time

Anthropic built one of the most capable AI coding tools available, then distributed it via npm — a fundamentally public registry. Researchers discovered the minified source could be deobfuscated, revealing Claude Code's internal architecture, tool implementations, and prompting strategies. Here's what happened, what was found, and what it means for your business.

| Category: AI Security

Tags: Claude Code, Anthropic, npm, AI Security, Source Code, Developer Tools, Enterprise AI, Cybersecurity
[Image: Claude Code's source code made public — what was inside the package]

Anthropic built one of the most capable AI coding assistants on the planet. They distributed it as an npm package. And then — because of how npm fundamentally works — anyone who knew where to look could download the package, crack it open, and start reading the source code. No hacking required. No zero-day exploit. Just a package manager doing exactly what package managers do.

This is the story of how Claude Code's internals became an accidental open-source project, what the community found inside, and what it means for the companies building proprietary AI products on public infrastructure.

Wait — What Even Is npm, and Why Does This Matter?

If you're a developer, skip this bit. If you're a business or IT leader, this context is everything.

npm (Node Package Manager) is the world's largest software registry — a massive public library where developers share JavaScript and TypeScript packages. It powers the modern web. Over 2 million packages live there, downloaded billions of times a day. When Anthropic wanted to distribute Claude Code as a command-line tool for developers, npm was the obvious choice. You run npm install -g @anthropic-ai/claude-code, and within seconds you've got Claude Code running on your machine.

Here's the thing about npm: it's a public library. When you publish a package there, its contents — all its files — are publicly downloadable by anyone. That's the point. The registry exists to share code. The assumption, historically, is that if you're publishing to npm, you want people to be able to use (and read) your code.

Anthropic did not want people to read their code. But they published to npm anyway, because that's how you distribute a developer CLI tool. This tension — proprietary software distributed through a transparency-first platform — is where the story begins.

[Image: How npm works — publishing 'private' code to a fundamentally public registry]

How the Code Became Readable

Anthropic's npm package contained minified JavaScript — code that had been compressed, with whitespace stripped and variable names shortened to meaningless single letters. This is standard practice for shipping (and loosely protecting) proprietary JavaScript. It's not encryption. It's more like writing your diary in small, cramped handwriting: inconvenient to read, but not actually secure.

Modern JavaScript deobfuscation tools — unminifiers and pretty-printers — are very good at reversing minification. Researchers and curious developers downloaded the Claude Code npm package, ran the minified bundle through these tools, and recovered code that was substantially readable. The underlying TypeScript source structure — the functions, the logic flow, the internal architecture — became apparent.
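A toy example makes the point. Minified code behaves identically to the original, and its structure survives intact; a pretty-printer restores the layout, and a reader can re-infer the names:

```javascript
// The same function, written readably and in minified form.
const addNumbers = (firstOperand, secondOperand) => firstOperand + secondOperand;
const f=(a,b)=>a+b; // minified: whitespace gone, names meaningless, logic unchanged

// Both compute the same result, because minification only changes the surface.
console.log(addNumbers(2, 3) === f(2, 3)); // true
```

Nothing about the logic is hidden; only the labels are. That gap between "inconvenient" and "secret" is the whole story here.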

Additionally, npm packages often contain more than just the production bundle. Build artefacts, source maps (which are essentially a roadmap back to the original source), and configuration files sometimes make it into published packages inadvertently. Source maps in particular are a common culprit — they're essential for debugging during development and easy to forget to exclude before publishing.
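Spotting a leftover source map reference takes only a few lines. The bundle content and the filename below are invented for illustration:

```javascript
// A published bundle often ends with a comment pointing at its source map.
// If that .map file ships inside the package, it maps the bundle back to the source.
const bundleTail = 'console.log("hello");\n//# sourceMappingURL=cli.js.map';

function findSourceMapReference(bundle) {
  const match = bundle.match(/\/\/# sourceMappingURL=(\S+)/);
  return match ? match[1] : null; // null when no map is referenced
}

console.log(findSourceMapReference(bundleTail)); // cli.js.map
```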

The result: Claude Code's internal workings were visible to anyone with a few hours and the right tools.

What Was Actually Found

Security researchers and developers who examined the extracted code reported finding several categories of interesting material:

The tool implementation architecture. Claude Code is built around a set of "tools" — discrete capabilities like reading files, writing code, running bash commands, searching the web. The actual implementation of these tools — how they're constructed, how they communicate with Claude's API, how errors are handled — became visible. This is the kind of internal engineering detail Anthropic almost certainly wanted to keep proprietary.

Internal prompting strategies. Beyond the main system prompt (which was a separate earlier leak), the source code contained the scaffolding that constructs prompts dynamically — how Claude Code assembles context from your codebase, what it includes and excludes, how it handles different types of programming tasks. This is hard-won product wisdom encoded in code, built through extensive iteration.
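A toy sketch of that kind of dynamic context assembly, purely for illustration (the function name and budget logic here are invented, not Claude Code's):

```javascript
// Assemble file context under a size budget: include files until the budget
// would be exceeded, then stop. What gets included vs excluded is the product.
function assemblePrompt(task, files, maxChars = 2000) {
  let context = '';
  for (const file of files) {
    const snippet = `// ${file.path}\n${file.content}\n`;
    if (context.length + snippet.length > maxChars) break; // over budget: excluded
    context += snippet;
  }
  return `Task: ${task}\n\nRelevant files:\n${context}`;
}
```

Real implementations weigh relevance, recency, and token counts rather than raw character lengths, but the shape of the problem is the same.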

Hardcoded behaviours and heuristics. Logic governing when Claude Code decides to ask a clarifying question versus proceed, how it handles large files, its approach to different programming languages — these implementation choices reflect Anthropic's engineering judgement about what makes an effective AI coding tool.

API integration patterns. How Claude Code calls Anthropic's own API, how it manages context windows, how it handles rate limiting and errors — all of this became visible. Not credentials (those aren't in the source), but the patterns and strategies that make the product work.
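Rate-limit handling in API clients commonly follows a retry-with-exponential-backoff pattern. A generic sketch of that pattern, not Anthropic's actual code:

```javascript
// Retry a failing async call, doubling the wait between attempts.
async function withRetry(fn, maxAttempts = 3, baseDelayMs = 100) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err; // give up after the final attempt
      const delay = baseDelayMs * 2 ** (attempt - 1); // 100ms, 200ms, 400ms, ...
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
```

The value in a production client is in the judgement calls layered on top: which errors are retryable, how backoff interacts with context-window management, when to surface the failure to the user.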

The Security Angle: What This Is, and What It Isn't

[Image: Security researchers discover Claude Code's internals are publicly accessible]

Let's be clear about what this incident is not: it is not a data breach. No user data was exposed. No API keys or credentials were leaked (those live server-side at Anthropic, not in the client package). No one's Claude conversations are at risk. The exposure is of Anthropic's own intellectual property — their source code — not of anything belonging to users.

But that doesn't mean there are no security implications. A few worth thinking through:

Attack surface visibility. When you can read an application's source code, you can map its logic and attack surface far more efficiently. Security researchers — and less well-intentioned people — can spot edge cases, error handling gaps, and potential manipulation points that would be much harder to find through black-box testing alone. This is why security professionals care about source code confidentiality even when no user data is involved.

Competitive intelligence. The engineering decisions baked into a product like Claude Code represent significant investment. Competitors can learn from Anthropic's implementation choices without spending the years it took to arrive at them. This is the classic open-source dilemma, except Anthropic didn't choose it.

Prompt injection opportunities. Understanding exactly how Claude Code constructs its context gives sophisticated users — and adversaries — a clearer map for attempting to manipulate the model's behaviour through carefully crafted inputs.

The Harder Question: Can You Really Keep Code Secret in a Public Package?

Honestly? No — not reliably. And this is a lesson that extends far beyond Claude Code.

The JavaScript ecosystem was built on openness. The web runs on JavaScript, JavaScript is inherently client-side (meaning it runs on your machine, not on a server), and the entire culture of npm is one of sharing. Trying to keep JavaScript secret is like trying to have a private conversation in a crowded hawker centre — you can lower your voice, but you can't control who's listening.

The alternatives Anthropic had — and that any company in this position has — are:

  • Full server-side architecture: Keep all the proprietary logic running on Anthropic's servers, with the npm package being just a thin client that sends requests. This limits functionality (no offline mode, higher latency) but keeps the IP protected.
  • WebAssembly: Compiled WASM is significantly harder to reverse-engineer than JavaScript. Not impossible, but the effort required is substantially higher.
  • Native binaries: Distribute a compiled executable rather than a JavaScript package. Reverse engineering is harder, distribution is more complex.
  • Accept it and go faster: Treat the code as effectively public, focus on the data and model advantages that can't be extracted from a package, and out-innovate rather than out-obscure.
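The first option can be sketched in a few lines: the published client contains nothing worth extracting, because it only builds and forwards requests. The endpoint URL below is invented for illustration:

```javascript
// A "thin client": prompt assembly, tool logic, and heuristics all stay
// server-side. The distributed package just packages up the user's request.
function buildTaskRequest(task) {
  return {
    url: 'https://api.example.com/v1/tasks', // hypothetical server endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ task }),
  };
}
```

Deobfuscating a client like this yields almost nothing, which is precisely the trade-off: the IP is protected, at the cost of latency, offline use, and local-filesystem capabilities.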

The last option is increasingly what sophisticated software companies are choosing. Your moat in AI isn't your JavaScript. It's your model, your data, your feedback loops, and your ability to ship faster than anyone reverse-engineering you can copy.

What This Means for Businesses Using Claude Code

If your developers are using Claude Code, here's your practical takeaway list:

Your data is not exposed. The code your team writes, the prompts they send, the conversations with Claude — none of that is in the npm package. That data goes to Anthropic's API, which operates on their secured infrastructure. This is a source code exposure, not a data breach.

The tool still works exactly as it did. The fact that someone can read how Claude Code is implemented doesn't change its capabilities or its reliability. Your developers can keep using it.

Review your own npm hygiene. If your organisation publishes npm packages — internal or external — this is a good moment to audit what's in them. Source maps, configuration files, environment variable references, hardcoded staging endpoints — these show up in npm packages constantly. A quick npm pack followed by inspecting the tarball contents before publishing is a habit worth building.
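A lightweight pre-publish check along these lines can be scripted against the file list npm pack reports. This is an illustrative sketch; the patterns flagged are common offenders, not an exhaustive list:

```javascript
// Flag files that commonly end up in published packages by accident.
const riskyPatterns = [/\.map$/, /\.env/, /\.(pem|key)$/, /staging/i];

function auditPackedFiles(files) {
  return files.filter(f => riskyPatterns.some(pattern => pattern.test(f)));
}

// Example: the source map and env file get flagged, the bundle does not.
console.log(auditPackedFiles(['dist/cli.js', 'dist/cli.js.map', '.env.staging']));
```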

Think about where your IP actually lives. If your competitive advantage is in proprietary algorithms distributed as JavaScript packages, you have an architecture problem that's independent of anything Claude Code did. Consider whether your most valuable logic belongs on the client at all.

Anthropic's Response and the Road Forward

Anthropic has not made a dramatic public statement about this — which is itself a kind of statement. The company's public positioning has consistently emphasised the primacy of their model capabilities over their application layer code. In that framing, the source code of a CLI tool is not the crown jewel; the model is.

That said, subsequent versions of Claude Code have shown increased attention to what is and isn't included in the published package — tighter build processes, cleaner separation between client and server concerns, and more deliberate choices about what proprietary logic runs locally versus remotely.

The incident will not be the last of its kind. As AI companies race to build developer tools, CLI utilities, and IDE extensions — all distributed through public package registries — the tension between proprietary implementation and open distribution infrastructure is going to surface again and again. The npm ecosystem in particular was simply not designed for the era of companies trying to keep AI implementation details secret while simultaneously handing every developer on the planet a local copy of their product.

It's a good problem to have, in one sense. It means your tool is popular enough to be worth examining. But it's a problem that requires a deliberate architectural answer — not just better minification.


Applied AI helps Malaysian organisations assess the security posture of their AI tool stack — including understanding what data leaves your environment, what proprietary information might be exposed through your own software distribution choices, and how to build AI-powered products with appropriate architectural safeguards. If any of this hit close to home, let's talk.

Ready to Transform Your Business with AI?

Contact Applied AI today to discuss how we can help you leverage artificial intelligence for competitive advantage.

Get Started Today