Every extension I install for AI coding assistants follows the same pattern. Clone a Git repo. Hope the README is current. Discover three sessions later that something changed upstream and the tool silently stopped working. No version pinning. No integrity checks. No way to know what broke or when.

I started building ccpkg after the fourth time a Claude Code plugin failed because a dependency repo reorganized its directory structure. The plugin’s install script pointed to a path that no longer existed. No error message. Just silent failure during a session where I needed it most.

ccpkg is an open packaging format for AI coding assistant extensions. One file, one install, zero post-install steps. The spec lives at ccpkg.dev and the source is on GitHub under CC-BY-4.0.

Four Failures That Compound

The current state of AI extension distribution has four compounding failures.

Brittleness. Extensions installed from Git repos break silently. Version pinning exists but it’s manual and fragile. Integrity verification is rare. Dependency vendoring is nonexistent. When a transitive dependency updates and introduces a breaking change, your extension fails at runtime with no clear diagnostic. I tracked one failure that took 40 minutes to diagnose: an MCP server config pointed to a file that had been renamed two commits prior in the upstream repo.

Slow startup. Every session fetches plugin state from remote repos. Install five extensions and you’re waiting for five separate network round-trips before your first prompt. I measured this: a clean Claude Code startup with six MCP-based plugins took 47 seconds before it was ready for input. Most of that time was network I/O that shouldn’t happen at startup at all.

No trust signals. Finding good extensions means word-of-mouth. There’s no curation, no verification metadata, no structured way to evaluate quality. You install something because someone on Twitter recommended it, then discover it hasn’t been updated in four months and half its features are broken.

Configuration burden. MCP server configs end up buried in plugin cache directories. Secrets and environment variables require editing opaque JSON files scattered across your filesystem. Every tool has its own config format and location. Setting up the same extension on a second machine means repeating the entire manual process.

These aren’t minor inconveniences. They compound into a tax on every developer who tries to extend their AI coding tools. And the tax grows with every extension you install.

What ccpkg Specifies

ccpkg is a specification, not a tool. It defines a packaging format where everything an AI extension needs lives in a single self-contained archive. All dependencies vendored inside. No runtime network fetches. No post-install scripts that might fail.

The format builds on three cross-tool standards: MCP for tool servers, LSP for language intelligence, and Agent Skills for behavioral extensions. A compliant implementation would let a single ccpkg archive work across Claude Code, Gemini CLI, Codex, Copilot, and any other tool that implements these protocols.

Here’s what the install experience would look like in a conforming implementation:

# Install a package
ccpkg install code-review-helper

# Install a specific version
ccpkg install code-review-helper@2.1.0

# Install from a custom registry
ccpkg install code-review-helper --registry https://registry.example.com

One command. No cloning, no dependency resolution, no configuration dance. That’s the goal.

Self-Contained Archives

A ccpkg archive is a single file containing everything the extension needs to run. The manifest declares what’s inside and what the package provides:

{
  "name": "code-review-helper",
  "version": "2.1.0",
  "description": "Automated code review with style enforcement",
  "capabilities": {
    "mcp_servers": ["review-server"],
    "skills": ["code-review"]
  },
  "config": {
    "github_token": {
      "type": "secret",
      "description": "GitHub API token for PR access",
      "required": true
    },
    "style_guide": {
      "type": "enum",
      "values": ["google", "airbnb", "standard"],
      "default": "standard"
    }
  }
}

Dependencies ship inside the archive. When you install a ccpkg, nothing reaches out to npm, pip, or any other package registry at runtime. The package author resolved and vendored everything at build time. This eliminates an entire class of “works on my machine” failures.

Lazy Loading

Twenty installed packages should have the same startup cost as zero. That’s the design goal.

The spec achieves this through lazy loading. At startup, only package metadata loads: names, versions, capability declarations. The extension code, MCP server binaries, and skill definitions stay on disk until something triggers them.

When you invoke a tool that maps to a specific package, that package loads on demand. The first invocation pays a small cost. Every subsequent call in the same session is instant.

Compare this to the current approach where every plugin initializes eagerly, fetches remote state, and blocks the session until all network calls complete. The spec eliminates that entire class of startup penalty.
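The lazy-loading idea can be sketched in a few lines. This is a minimal illustration, not code from the spec: `LazyPackage` and its `loader` callback are hypothetical names standing in for whatever expensive work a real host tool does (unpacking code, spawning an MCP server process).

```python
from typing import Callable

class LazyPackage:
    """Holds cheap metadata at startup; loads the real package on first use."""

    def __init__(self, name: str, version: str, loader: Callable[[], object]):
        # Metadata is always available without touching the package body.
        self.name = name
        self.version = version
        self._loader = loader
        self._instance = None

    def invoke(self):
        # First invocation pays the load cost; later calls reuse the instance.
        if self._instance is None:
            self._instance = self._loader()
        return self._instance

loads = []
pkg = LazyPackage("code-review-helper", "2.1.0",
                  loader=lambda: loads.append("loaded") or "server-handle")
assert loads == []            # nothing loaded at startup
pkg.invoke()
pkg.invoke()
assert loads == ["loaded"]    # the loader ran exactly once
```

Twenty such objects cost twenty metadata records at startup, which is the "same cost as zero" property the spec aims for.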

Typed Configuration

Configuration is the most underestimated pain point in extension management. Every extension handles it differently. Some read environment variables. Some look for dotfiles. Some expect you to edit JSON configs in directories you didn’t know existed.

ccpkg standardizes this with typed config slots declared in the manifest. The types are specific: secret for API keys and tokens, string for general text, enum for constrained choices, path for filesystem locations.

# Configure during install
ccpkg install code-review-helper
# Prompts: Enter your GitHub token (secret):
# Prompts: Select style guide [google/airbnb/standard]:

# Reconfigure later
ccpkg config code-review-helper github_token

Users configure once at install time. Templates wire those values into the appropriate MCP server configs, environment variables, and skill parameters automatically. No manual JSON editing. No hunting for config file locations.

The distinction between secret and string types matters for security. Secrets get stored in the system keychain or an encrypted store, not in plaintext JSON files sitting in your home directory. I’ve seen too many MCP server configs committed to Git with API keys in cleartext.

Deterministic Lockfiles

Teams need reproducible environments. When your colleague installs the same set of extensions, they should get exactly the same versions with exactly the same behavior.

ccpkg generates a ccpkg-lock.json file that pins exact versions with cryptographic checksums:

{
  "lockfileVersion": 1,
  "packages": {
    "code-review-helper": {
      "version": "2.1.0",
      "integrity": "sha256-a3f2b8c91d...",
      "registry": "https://registry.example.com"
    },
    "test-generator": {
      "version": "1.4.3",
      "integrity": "sha256-e7d4f1a02b...",
      "registry": "https://registry.example.com"
    }
  }
}

Commit this file to your repo. When a teammate runs ccpkg install, they get byte-identical packages. No version drift. No “it works on my machine” because someone got a patch release you didn’t.

This is table stakes for package managers in every other domain. npm has package-lock.json. Cargo has Cargo.lock. Python has poetry.lock and pinned requirements files. AI extension tooling has nothing equivalent yet.
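Verifying an archive against the lockfile is straightforward; this sketch assumes the `sha256-<hexdigest>` integrity form shown in the example above, and the helper name is mine, not the spec's.

```python
import hashlib

def check_against_lockfile(lockfile: dict, name: str, archive: bytes) -> bool:
    """Return True if a downloaded archive matches its pinned integrity hash."""
    entry = lockfile["packages"][name]
    algo, _, expected = entry["integrity"].partition("-")
    if algo != "sha256":                    # only algorithm shown in the examples
        raise ValueError(f"unsupported integrity algorithm: {algo}")
    return hashlib.sha256(archive).hexdigest() == expected

archive = b"pretend this is code-review-helper-2.1.0.ccpkg"
lock = {"lockfileVersion": 1,
        "packages": {"code-review-helper": {
            "version": "2.1.0",
            "integrity": "sha256-" + hashlib.sha256(archive).hexdigest()}}}

assert check_against_lockfile(lock, "code-review-helper", archive)
# A tampered or truncated download fails before anything unpacks.
assert not check_against_lockfile(lock, "code-review-helper", archive + b"x")
```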

Decentralized Registries

ccpkg has no central authority. Registries are JSON files you can host anywhere: GitHub Pages, S3, a static web server, your company’s internal CDN.

# Add a registry
ccpkg registry add company https://extensions.internal.company.com

# List configured registries
ccpkg registry list

# Search across all registries
ccpkg search "code review"

This design means your organization can run a private registry for internal extensions without asking anyone’s permission. Public registries can compete on curation quality rather than acting as gatekeepers.

A registry is a JSON index file pointing to archive URLs. No database, no authentication service, no API server to maintain. If you can host static files, you can run a ccpkg registry.

Install Scope Control

Some extensions make sense globally: a general-purpose code formatter, a documentation generator. Others belong to specific projects: a style checker configured for your team’s conventions, a test runner tuned to your framework.

ccpkg supports both, and an explicit scope flag always overrides the default:

# Install globally
ccpkg install formatter --global

# Install for current project only
ccpkg install style-checker --local

# Project lockfile tracks local installs
# Global config tracks global installs

Global installs go in ~/.ccpkg/. Local installs go in .ccpkg/ within your project directory and get tracked by the project lockfile. When there’s a conflict between global and local, local wins within the project scope.

Dev Mode

Extension authors need tight feedback loops. Rebuilding and reinstalling a package after every change during development is slow and frustrating.

Dev mode creates a symlink from your development directory into the ccpkg install location:

# Link local directory for development
ccpkg dev link ./my-extension

# Changes to ./my-extension take effect immediately
# No rebuild, no reinstall

Edit a file, switch to your AI coding tool, and the changes are live. This cuts the development cycle from “edit, build, package, install, test” to “edit, test.”
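Under the hood this is just a symlink, which is why changes are live with no copy step. A sketch, assuming a POSIX filesystem and the install layout from the previous section (`dev_link` is my name, not a spec API):

```python
import tempfile
from pathlib import Path

def dev_link(source: Path, install_root: Path) -> Path:
    """Symlink a development directory into the install location."""
    target = install_root / source.name
    if target.is_symlink():
        target.unlink()     # re-linking replaces a stale link
    target.symlink_to(source.resolve(), target_is_directory=True)
    return target

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    ext = root / "my-extension"
    ext.mkdir()
    installs = root / ".ccpkg"
    installs.mkdir()
    link = dev_link(ext, installs)
    # An edit in the dev directory is immediately visible through the link.
    (ext / "SKILL.md").write_text("updated")
    assert (link / "SKILL.md").read_text() == "updated"
```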

Cross-Tool Portability

The biggest design bet in ccpkg is portability. A package built for Claude Code should work with Gemini CLI, Codex, and Copilot without modification.

This works because ccpkg builds on protocol standards, not tool-specific APIs:

  • MCP servers provide tool capabilities through a standardized protocol
  • LSP servers provide language intelligence through an established standard
  • Agent Skills provide behavioral extensions through declarative definitions

Any AI coding tool that implements MCP can load a ccpkg’s tool servers. Any tool that supports LSP can use its language features. Skills are declarative markdown files that describe capabilities; they adapt to whatever agent runtime loads them.

Not every package will achieve perfect portability. Some extensions will rely on features specific to one tool. But the goal is that most packages, especially MCP servers and LSP integrations, work everywhere without modification. The spec deliberately avoids tool-specific APIs to make this realistic.

The Manifest Format

The full manifest supports capabilities beyond what I’ve shown. Here’s a more complete example:

{
  "name": "db-inspector",
  "version": "1.0.0",
  "description": "Database schema inspection and query assistance",
  "author": "zircote",
  "license": "MIT",
  "capabilities": {
    "mcp_servers": [
      {
        "name": "db-inspect",
        "command": "node",
        "args": ["server.js"],
        "transport": "stdio"
      }
    ]
  },
  "config": {
    "connection_string": {
      "type": "secret",
      "description": "Database connection URI",
      "required": true
    },
    "max_rows": {
      "type": "string",
      "description": "Maximum rows returned per query",
      "default": "100"
    }
  },
  "dependencies": {
    "vendored": true,
    "node_modules": "included"
  }
}

The manifest is the source of truth. It declares what the package provides, what it needs from the user, and how to wire everything together. Package managers and AI tools read this file to understand the package without executing any code.
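That "without executing any code" property is concrete: a host tool can derive its launch plan purely by reading JSON. A sketch against an abbreviated version of the db-inspector manifest above (`launch_plan` is an illustrative helper, not a spec API):

```python
import json

manifest_json = """{
  "name": "db-inspector",
  "version": "1.0.0",
  "capabilities": {
    "mcp_servers": [
      {"name": "db-inspect", "command": "node",
       "args": ["server.js"], "transport": "stdio"}
    ]
  },
  "config": {
    "connection_string": {"type": "secret", "required": true}
  }
}"""

def launch_plan(manifest: dict) -> list[list[str]]:
    """Derive MCP server launch commands from declarations alone.

    Nothing inside the package runs during this step; the host only
    reads the manifest.
    """
    servers = manifest.get("capabilities", {}).get("mcp_servers", [])
    return [[s["command"], *s.get("args", [])] for s in servers]

manifest = json.loads(manifest_json)
assert launch_plan(manifest) == [["node", "server.js"]]
# Required config slots are equally discoverable before install.
required = [k for k, v in manifest["config"].items() if v.get("required")]
assert required == ["connection_string"]
```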

Current Status

ccpkg is a draft specification under active development. The spec itself is CC-BY-4.0 licensed, meaning anyone can build tooling around it without restriction.

What exists today:

  • Complete specification for the package format and manifest schema
  • Registry format definition for decentralized distribution
  • Lockfile specification for deterministic installs
  • Config system specification for typed configuration

What’s in progress:

  • Reference implementation of the ccpkg CLI
  • Public registry hosting
  • Build tooling for package authors
  • Integration guides for AI coding tool developers

The spec is designed to be implemented independently. If you build AI coding tools, you can add ccpkg support by reading archives and parsing manifests. There’s no SDK dependency, no license fee, no approval process.

Why an Open Spec

I could have built ccpkg as a proprietary tool tied to one AI coding assistant. It would have been simpler. But the fragmentation problem only gets solved if the format works everywhere.

The CC-BY-4.0 license means anyone can implement ccpkg support, fork the spec, or build commercial tooling around it. The specification is the product, not the implementation. Multiple competing implementations mean better tooling for everyone.

Package management is infrastructure. Infrastructure should be open.

Getting Involved

The spec is at ccpkg.dev and the source is on GitHub.

If you’re building AI coding tool extensions, read the spec and tell me what’s missing. File issues for edge cases I haven’t considered. The spec is a draft precisely because real-world feedback shapes better standards than armchair design.

If you’re building AI coding tools, consider implementing ccpkg support. The format is simple enough that a basic implementation (archive extraction, manifest parsing, MCP server launching) takes a few hundred lines of code.

If you’re tired of your AI extensions breaking silently, star the repo. The more signal that developers want reliable extension packaging, the faster this moves from spec to standard tooling.