Bring your models. We'll bring the party.
One terminal. Multiple AI agents. No hierarchy.
Claude, Codex, Copilot & more — working together as peers.
// Features
Each LLM has strengths. Instead of picking one, let them collaborate.
Address agents with @claude, @codex, @copilot or broadcast to @all. Messages go exactly where you want.
Agents hand work to each other with @next:tag. Up to 15 automatic hops per cycle (configurable) — no human bottleneck.
No master/servant hierarchy. No MCP. Every agent is a peer with equal access. You orchestrate from the terminal.
Agents maintain context across messages. Resume conversations instead of restarting them. Session transcripts are saved as JSONL.
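As a sketch of what a JSONL transcript could hold, each line is one JSON object per message; the field names below are assumptions, not the tool's documented schema:

```json
{"role": "user", "agent": "claude", "content": "architect a caching layer", "ts": "2025-01-01T12:00:00Z"}
{"role": "assistant", "agent": "claude", "content": "Here's the design... @next:codex", "ts": "2025-01-01T12:00:04Z"}
```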
Native SDKs for each provider — Claude Agent SDK, Codex SDK, Copilot SDK. No CLI wrapping or output scraping.
Add new LLM providers by implementing one interface. Configure agents with JSON — models, prompts, tools, all customizable.
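A sketch of what that one interface might look like, assuming a minimal session-based contract; the names and signatures here are illustrative, not the actual llm-party-cli API:

```typescript
// Hypothetical provider contract; the real interface may differ.
interface AgentProvider {
  readonly name: string;
  // Send a prompt inside a persistent session and resolve to the reply text.
  send(sessionId: string, prompt: string): Promise<string>;
}

// Minimal stand-in implementation, just to show the contract.
class EchoProvider implements AgentProvider {
  readonly name = "echo";
  async send(sessionId: string, prompt: string): Promise<string> {
    return `[${this.name}:${sessionId}] ${prompt}`;
  }
}
```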
// How it works
From install to multi-agent workflow in under a minute.
Define who's at the party — models, tags, system prompts, tools.
// configs/default.json
"agents": [
{ "tag": "claude", "provider": "claude", "model": "opus" },
{ "tag": "copilot", "provider": "copilot" }
]
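The per-cycle hop limit mentioned under Features is configurable; one guess at how that could sit in the same file (the key name is an assumption):

```json
{
  "maxHops": 15,
  "agents": [
    { "tag": "claude", "provider": "claude", "model": "opus" },
    { "tag": "codex", "provider": "codex" }
  ]
}
```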
One command spins up all agents with persistent sessions.
$ npx llm-party-cli
Route messages with @tags. Agents collaborate and hand off work autonomously.
YOU > @claude architect a caching layer
[CLAUDE] Here's the design... @next:codex
[CODEX] Implemented. Tests passing.
// Providers
First-class SDK integrations. No wrappers, no scrapers.
Anthropic Claude Agent SDK
Opus • Sonnet
OpenAI Codex SDK
GPT-4.1 • o3
GitHub Copilot SDK
GPT-4.1 • Claude
Proxy adapter
Any OpenAI-compatible
// Get started
Install globally and launch. That's it.
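Concretely, that might look like the following; the global install uses npm's standard `-g` flag, and the binary name is assumed to match the package:

```shell
# Install once, then launch; agents are read from configs/default.json.
npm install -g llm-party-cli
llm-party-cli
```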