Open Source • MIT Licensed

llm-party

Bring your models. We'll bring the party.

One terminal. Multiple AI agents. No hierarchy.
Claude, Codex, Copilot & more — working together as peers.

Get Started View on GitHub
llm-party — 3 agents active
YOU > @claude review this function
[CLAUDE] The error handling on line 42 silently swallows exceptions...
YOU > @codex fix what claude found
[CODEX] Fixed. Added proper error propagation with context.
YOU > @copilot write tests for the fix
[COPILOT] Added 3 test cases covering the new error paths. All passing.

Why throw a party?

Each LLM has strengths. Instead of picking one, let them collaborate.

Tag-Based Routing

Address agents with @claude, @codex, @copilot or broadcast to @all. Messages go exactly where you want.
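An @all broadcast fans the same message out to every agent. This transcript is illustrative, not captured output:

```
YOU > @all what's the riskiest part of this diff?
[CLAUDE] The schema migration — it isn't reversible.
[CODEX] Agreed; I'd also add a guard on the batch size.
[COPILOT] No tests cover the rollback path yet.
```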

Agent-to-Agent Handoff

Agents hand work to each other with @next:tag. Up to 15 automatic hops per cycle (configurable) — no human bottleneck.
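The hop limit lives in the agent config. The key name below ("maxHops") is an illustrative guess, not a documented setting — check the repository for the real one:

```json
// configs/default.json — "maxHops" is hypothetical
{
  "maxHops": 15,
  "agents": [ "..." ]
}
```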

Peer Architecture

No master/servant hierarchy. No MCP. Every agent is a peer with equal access. You orchestrate from the terminal.

Persistent Sessions

Agents maintain context across messages. Resume conversations instead of restarting them. Session transcripts are saved as JSONL.

Direct SDK Integration

Native SDKs for each provider — Claude Agent SDK, Codex SDK, Copilot SDK. No CLI wrapping or output scraping.

Extensible Adapters

Add new LLM providers by implementing one interface. Configure agents with JSON — models, prompts, tools, all customizable.
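A new provider boils down to one adapter. The interface below is a hypothetical sketch — the real name and method signatures may differ from the repository's contract:

```typescript
// Hypothetical adapter shape: one tag, one send() per turn.
// The actual llm-party interface may look different.
interface AgentAdapter {
  readonly tag: string;                   // how users address the agent (@tag)
  send(message: string): Promise<string>; // one turn: prompt in, reply out
}

// Minimal example: an echo adapter standing in for a real provider SDK.
class EchoAdapter implements AgentAdapter {
  constructor(readonly tag: string) {}
  async send(message: string): Promise<string> {
    return `[${this.tag.toUpperCase()}] echo: ${message}`;
  }
}

async function demo(): Promise<string> {
  const agent: AgentAdapter = new EchoAdapter("glm");
  return agent.send("hello");
}
```

A real adapter would call the provider's SDK inside `send` and map its streaming events onto the party's message format.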

Three steps to the party

From install to multi-agent workflow in under a minute.

1

Configure your agents

Define who's at the party — models, tags, system prompts, tools.

// configs/default.json
{
  "agents": [
    { "tag": "claude", "provider": "claude", "model": "opus" },
    { "tag": "copilot", "provider": "copilot" }
  ]
}
2

Launch the party

One command spins up all agents with persistent sessions.

$ npx llm-party-cli
3

Talk to your team

Route messages with @tags. Agents collaborate and hand off work autonomously.

YOU > @claude architect a caching layer
[CLAUDE] Here's the design... @next:codex
[CODEX] Implemented. Tests passing.

The guest list

First-class SDK integrations. No wrappers, no scrapers.

Claude

Anthropic Claude Agent SDK
Opus • Sonnet

Codex

OpenAI Codex SDK
GPT-4.1 • o3

Copilot

GitHub Copilot SDK
GPT-4.1 • Claude

GLM

Proxy adapter
Any OpenAI-compatible
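Wiring the proxy adapter to an OpenAI-compatible endpoint might look like this. The key names ("baseUrl", "apiKeyEnv") and the URL are illustrative guesses, not documented settings:

```json
// configs/default.json — hypothetical proxy-adapter keys
{
  "agents": [
    {
      "tag": "glm",
      "provider": "proxy",
      "baseUrl": "https://example.com/v1",
      "apiKeyEnv": "GLM_API_KEY",
      "model": "glm-4"
    }
  ]
}
```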

Join the party

Install globally and launch. That's it.

$ npm install -g llm-party-cli
$ llm-party launch
View on GitHub