Code to Context.
Instantly.
Generate structured, AI-optimized prompts from any GitHub repository. Feed your entire codebase to ChatGPT, Claude, or Gemini in seconds.
How it works
Three steps. Zero friction.
Connect your repo
Paste a GitHub URL or sign in to browse your private repositories directly.
Select your files
Pick exactly which files to include. Smart defaults exclude binaries, lock files, and build artifacts.
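The "smart defaults" idea can be sketched as a simple path filter. The exclusion lists below are illustrative assumptions, not PMP's actual rules:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// Hypothetical default exclusions; the real PMP defaults may differ.
var excludedNames = map[string]bool{
	"package-lock.json": true,
	"yarn.lock":         true,
	"go.sum":            true,
}

var excludedDirs = map[string]bool{
	"node_modules": true,
	"dist":         true,
	"build":        true,
	".git":         true,
}

var binaryExts = map[string]bool{
	".png": true, ".jpg": true, ".zip": true, ".exe": true, ".so": true,
}

// includeByDefault reports whether a repo-relative path survives the
// default filter: no excluded directory component, no lock-file name,
// no binary extension.
func includeByDefault(path string) bool {
	for _, part := range strings.Split(filepath.ToSlash(path), "/") {
		if excludedDirs[part] {
			return false
		}
	}
	base := filepath.Base(path)
	if excludedNames[base] {
		return false
	}
	return !binaryExts[filepath.Ext(base)]
}

func main() {
	for _, p := range []string{"src/main.go", "package-lock.json", "assets/logo.png"} {
		fmt.Printf("%-20s %v\n", p, includeByDefault(p))
	}
}
```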
Generate & copy
The PMP engine runs in your browser via WebAssembly and assembles the prompt locally. Copy or download.
Web Application
Analyze public and private GitHub repositories directly from your browser. Zero installation. The entire prompt is generated locally — your code never touches our servers.
- Private repos via GitHub OAuth
- WASM engine — runs in your browser
- TXT, JSON, XML output formats
- Code stays 100% local
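A structured output such as the JSON format might look roughly like this. The field names (`repository`, `files`, `path`, `content`) are assumptions for illustration, not PMP's actual schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical shape of a structured prompt document.
type PromptFile struct {
	Path    string `json:"path"`
	Content string `json:"content"`
}

type Prompt struct {
	Repository string       `json:"repository"`
	Files      []PromptFile `json:"files"`
}

// toJSON renders the prompt as indented JSON; marshaling this struct
// cannot fail, so the error is discarded.
func toJSON(p Prompt) string {
	out, _ := json.MarshalIndent(p, "", "  ")
	return string(out)
}

func main() {
	p := Prompt{
		Repository: "example/repo",
		Files:      []PromptFile{{Path: "main.go", Content: "package main\n"}},
	}
	fmt.Println(toJSON(p))
}
```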
CLI Tool
A Go-based CLI for your local projects. Blazing fast, fully configurable, and scriptable — ideal for CI pipelines and large monorepos.
- Parallel processing (multi-core)
- Smart .gitignore & binary filtering
- Dependency graphs (DOT format)
- TXT, JSON, XML, DOT output
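Multi-core parallel processing of the kind listed above can be sketched as a goroutine worker pool. This is a minimal illustration, not the CLI's actual implementation; the injected `read` function stands in for `os.ReadFile` so the sketch stays self-contained:

```go
package main

import (
	"fmt"
	"os"
	"sync"
)

// readAll fans file reads out across a fixed pool of worker goroutines
// and collects path -> content. Unreadable files are skipped.
func readAll(paths []string, read func(string) ([]byte, error), workers int) map[string]string {
	jobs := make(chan string)
	out := make(map[string]string)
	var mu sync.Mutex
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for p := range jobs {
				data, err := read(p)
				if err != nil {
					continue // skip unreadable files
				}
				mu.Lock()
				out[p] = string(data)
				mu.Unlock()
			}
		}()
	}
	for _, p := range paths {
		jobs <- p
	}
	close(jobs)
	wg.Wait()
	return out
}

func main() {
	files := readAll(os.Args[1:], os.ReadFile, 4)
	fmt.Printf("read %d files\n", len(files))
}
```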
Get started in seconds
Install the CLI
Available via Go, cURL, or Homebrew. Transform any local directory into an LLM-ready prompt in milliseconds.
Your code stays on your machine
The PMP engine is compiled to WebAssembly and runs entirely in your browser. File contents are fetched via an authenticated proxy and never stored. The generated prompt is assembled client-side and never transmitted to our servers.
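Client-side assembly as described above amounts to concatenating the selected files into a single text buffer in memory. The delimiter layout below is a hypothetical sketch, not PMP's exact prompt format:

```go
package main

import (
	"fmt"
	"strings"
)

// assemblePrompt joins selected files into one LLM-ready text prompt,
// entirely in memory — nothing is written out or transmitted.
func assemblePrompt(repo string, files map[string]string, order []string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "Repository: %s\n\n", repo)
	for _, path := range order {
		fmt.Fprintf(&b, "=== %s ===\n%s\n\n", path, files[path])
	}
	return b.String()
}

func main() {
	prompt := assemblePrompt("example/repo",
		map[string]string{"main.go": "package main\n"},
		[]string{"main.go"})
	fmt.Print(prompt)
}
```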