llm-party

Bring your models. We'll bring the party.

One terminal. Multiple AI agents. No hierarchy.
Claude, Codex, Copilot & more — working together as peers.

llm-party demo: multiple AI agents collaborating in one terminal

Why throw a party?

Each LLM has strengths. Instead of picking one, let them collaborate.

Tag-Based Routing

Address agents with @claude, @codex, @copilot or broadcast to @all. Messages go exactly where you want.
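
A routing exchange might look like this (illustrative transcript, not captured output):

YOU > @codex profile the slow test suite
[CODEX] Bottleneck is fixture setup, not the tests.
YOU > @all any objections to caching the fixtures?
[CLAUDE] None from me.
[COPILOT] Agreed.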

Agent-to-Agent Handoff

Agents hand work to each other with @next:tag. Up to 15 automatic hops per cycle (configurable) — no human bottleneck.
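
A sketch of a handoff chain using the @next:tag convention (agent replies invented for illustration):

[CLAUDE] Spec drafted. @next:codex
[CODEX] Implemented, two tests failing. @next:copilot
[COPILOT] Fixed the failing tests. Done.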

Peer Architecture

No master/servant hierarchy. No MCP. Every agent is a peer with equal access. You orchestrate from the terminal.

Session Resume

Pick up where you left off. --resume restores transcripts, SDK sessions, and per-agent cursors across all providers. No lost context.
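
Resuming a previous session is a single flag (invocation sketch, assuming the default config location):

your-project/> llm-party --resume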

Non-Blocking Queue

Type while agents work. Each agent has its own queue. Fast agents respond immediately, slow agents process when ready. No blocking, no waiting.

Direct SDK Integration

Native SDKs for each provider — Claude Agent SDK, Codex SDK, Copilot SDK. No CLI wrapping or output scraping.

Extensible Adapters

Add new LLM providers by implementing one interface. Configure agents with JSON — models, prompts, tools, all customizable.
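
A fuller agent entry might look like this (field names beyond tag, provider, and model are illustrative; check the project docs for the exact schema):

// ~/.llm-party/config.json (illustrative fields)
{
  "agents": [
    {
      "tag": "reviewer",
      "provider": "claude",
      "model": "opus",
      "systemPrompt": "You review diffs for correctness and style.",
      "tools": ["read", "grep"]
    }
  ]
}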

Agent Sidebar

Live activity panel showing what each agent is doing right now. File reads, shell commands, search queries, all visible at a glance. Toggle with Ctrl+B.

Cancel Panel

Press Esc to open the cancel panel. Pick which agents to stop while the rest keep working. Selective control, not all-or-nothing.

Skills

Drop skill folders into ~/.llm-party/skills/ or your project. Assign them per agent with preloadSkills. Specialized workflows without prompt bloat.
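
Assigning skills per agent is a config entry; the skill names and the value shape shown here are assumptions for illustration:

// ~/.llm-party/config.json (skill assignment, illustrative)
{
  "agents": [
    { "tag": "claude", "provider": "claude", "preloadSkills": ["code-review", "changelog"] }
  ]
}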

Mind-Map

Shared agent memory stored as Obsidian-compatible notes with [[wikilinks]]. Agents build a knowledge graph as they work. Open it in Obsidian to visualize connections across projects.
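
An agent note might look like this (contents invented for illustration; the point is the Obsidian-compatible format with [[wikilinks]]):

# caching-layer.md
Chose [[redis]] for the [[api-gateway]] cache; see [[invalidation-strategy]] for trade-offs.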

Not Just for Programmers

Storytelling, worldbuilding, roleplay, research, brainstorming. Any task where multiple perspectives make the output better. Agents remember, collaborate, and build on each other's work.

Three steps to the party

From install to multi-agent workflow in under a minute.

1

Configure your agents

Define who's at the party — models, tags, system prompts, tools.

// ~/.llm-party/config.json
{
  "agents": [
    { "tag": "claude", "provider": "claude", "model": "opus" },
    { "tag": "copilot", "provider": "copilot" }
  ]
}
2

Launch the party

One command spins up all agents with persistent sessions.

your-project/> llm-party
3

Talk to your team

Route messages with @tags. Agents collaborate and hand off work autonomously.

YOU > @claude architect a caching layer
[CLAUDE] Here's the design... @next:codex
[CODEX] Implemented. Tests passing.

The guest list

First-class SDK integrations. No wrappers, no scrapers.

Claude

Anthropic Claude Agent SDK

Codex

OpenAI Codex SDK

Copilot

GitHub Copilot SDK

Custom

Ollama, GLM, any Claude-compatible API
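
Pointing an agent at a local Ollama or other Claude-compatible endpoint might look like this (the provider value and baseUrl field are assumptions; consult the adapter docs for the real keys):

// ~/.llm-party/config.json (custom endpoint, illustrative)
{
  "agents": [
    { "tag": "local", "provider": "custom", "model": "llama3", "baseUrl": "http://localhost:11434" }
  ]
}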

Join the party

Install globally and launch. That's it.

$ bun add -g llm-party-cli
$ llm-party launch
View on GitHub