# llm-party

> Peer orchestrator that puts multiple AI agents in the same terminal. You talk, they listen. They talk to each other. Nobody is the boss except you.

## The Problem

Developers juggle multiple AI tools in separate windows. Claude in one terminal, Codex in another, Copilot somewhere else. No shared context. No collaboration. Copy-paste is the "integration."

## The Solution

llm-party connects Claude, Codex, Copilot, and any Claude-compatible API (Ollama, GLM, etc.) in a single terminal session. Every agent sees the full conversation. They hand off work to each other automatically. You route with @tags.

```
YOU > @claude review this function
[CLAUDE] The error handling on line 42 silently swallows exceptions...
YOU > @codex fix what claude found
[CODEX] Fixed. Added proper error propagation with context.
YOU > @copilot write tests for the fix
[COPILOT] Added 3 test cases covering the new error paths. All passing.
```

## How It Works

- Peer architecture: no master/servant hierarchy, no MCP. All agents are equals.
- Direct SDK integration: official published SDKs from each provider. Nothing reverse-engineered, patched, or bypassed.
- Persistent sessions: agents maintain context across messages. Resume with `--resume` or `/resume`. Per-agent SDK session IDs and message cursors are preserved in a manifest file.
- Non-blocking queue: each agent has its own processing queue. You can type while agents work. Fast agents respond immediately; slow agents queue and process when ready.
- Shared context: each agent receives only the messages it hasn't yet seen since its last turn, so nothing is processed twice on resume or concurrent dispatch.
- Agent-to-agent handoff: agents pass work to each other with `@next:tag`, up to 15 automatic hops per cycle (see the example after the Configuration section).
- Tag-based routing: `@claude`, `@codex`, `@copilot`, `@all`. Messages go exactly where you want.
- Agent sidebar: live activity panel showing what each agent is doing (file reads, shell commands, search queries). Toggle with Ctrl+B.
- Cancel panel: press Esc to selectively cancel running agents while others keep working.
- Animated splash screen: octopus mascot displayed when the terminal is idle.

## SDK Transparency

llm-party uses official, publicly available SDKs. All authentication flows through each provider's own CLI login. No credentials are stored, proxied, or intercepted.

| Provider | Official SDK                                  | Published by   |
| -------- | --------------------------------------------- | -------------- |
| Claude   | @anthropic-ai/claude-agent-sdk                | Anthropic      |
| Codex    | @openai/codex-sdk                             | OpenAI         |
| Copilot  | @github/copilot-sdk                           | GitHub         |
| Custom   | Any Claude-compatible API (Ollama, GLM, etc.) | Via native CLI |

## Install

Requires the Bun runtime and at least one AI CLI installed and authenticated (claude, codex, or copilot).

```bash
bun add -g llm-party-cli
```

Run inside any project directory:

```bash
your-project/> llm-party
```

The first run launches an interactive setup wizard that auto-detects installed CLIs. Custom providers (Ollama, GLM, etc.) can be added with just a URL and auth token. If your AI CLIs work on their own, they work inside llm-party.

## Configuration

Edit `~/.llm-party/config.json` to add or change agents:

```json
{
  "agents": [
    { "name": "Claude", "tag": "claude", "provider": "claude", "model": "opus" },
    { "name": "Codex", "tag": "codex", "provider": "codex", "model": "gpt-5.2" },
    { "name": "Copilot", "tag": "copilot", "provider": "copilot", "model": "gpt-4.1" }
  ]
}
```

That's it. Name, tag, provider, model. No paths, no prompts, no usernames to configure.
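Custom providers slot into the same `agents` list with the URL and auth token from the setup wizard. A sketch of what an Ollama entry might look like — the `"custom"` provider value and the `baseUrl`/`authToken` key names are illustrative guesses, not the documented schema; the setup wizard writes the real entry for you:

```json
{
  "name": "Ollama",
  "tag": "ollama",
  "provider": "custom",
  "model": "llama3.3",
  "baseUrl": "http://localhost:11434",
  "authToken": ""
}
```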
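With agents configured, a handoff chain needs no routing from you beyond the first message. An illustrative exchange — the agent replies are invented for demonstration; `@next:` is the handoff tag described above:

```
YOU > @claude refactor the parser, then hand it off for tests
[CLAUDE] Split the parser into three pure functions. @next:codex add unit tests.
[CODEX] Added 6 tests covering the new functions. All passing.
```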
## Safety Notice

All agents run with full permissions. They can read, write, and edit files and execute shell commands with zero approval gates. Always run in a disposable environment: a Docker container, a VM, or at minimum a throwaway git branch (a minimal branch-based sketch closes out this README).

## Quick Facts

- Package: `llm-party-cli` on npm
- License: MIT
- Runtime: Bun
- Built by: AALA Solutions (aalasolutions.com)
- Repository: github.com/aalasolutions/llm-party
- Website: llm-party.party
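As promised in the Safety Notice, here is one minimal way to keep agent edits disposable using only a scratch branch. This is an illustrative starting point, not a real sandbox: a container or VM is far safer, since a branch does nothing to contain shell commands.

```bash
# Illustrative precaution: corral llm-party's file edits on a scratch branch.
# This only protects tracked files; agent shell commands can still touch anything.
git switch -c llm-party-scratch   # create and switch to a throwaway branch
llm-party                         # let the agents work here
git diff main                     # review everything they changed
# keep the work:    git switch main && git merge llm-party-scratch
# discard the work: git switch main && git branch -D llm-party-scratch
```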