Cagent is Docker's multi-agent runtime for creating and orchestrating AI agents with specialized capabilities, tools, and knowledge. Each agent is defined in simple YAML files with model selection (OpenAI, Anthropic, Gemini, xAI, Mistral, Nebius, Docker Model Runner), instructions, and toolsets including MCP servers for containerized or standard tools via stdio, HTTP, or SSE transport.
The platform supports smart task delegation between agents, built-in reasoning tools (think, todo, memory), and pluggable RAG strategies with result reranking. Agents can be exposed as MCP tools for external clients, and Docker MCP Gateway enables secure access to containerized tools. cagent installs cross-platform via Homebrew or prebuilt binary releases for Windows, macOS, and Linux.
cagent 🤖 A powerful, easy-to-use, customizable multi-agent runtime that orchestrates AI agents with specialized capabilities and tools, and the interactions between agents.
[Image: cagent in action]
What is cagent? ✨
cagent lets you create and run intelligent AI agents, where each agent has specialized knowledge, tools and capabilities.
Think of it as allowing you to quickly build, share and run a team of virtual experts that collaborate to solve complex problems for you.
And it's dead easy to use!
⚠️ Note: cagent is in active development, breaking changes are to be expected ⚠️
Creating agents with cagent is straightforward. They are described in a short .yaml file, like this one:
Example basic_agent.yaml:
agents:
  root:
    model: openai/gpt-5-mini
    description: A helpful AI assistant
    instruction: |
      You are a knowledgeable assistant that helps users with various tasks.
      Be helpful, accurate, and concise in your responses.
Run it in a terminal with cagent run basic_agent.yaml.
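The model field can also refer to a named entry in a top-level models section, which keeps provider settings in one place when several agents share a configuration. The sketch below is illustrative only; the provider, model, temperature and max_tokens keys are assumptions rather than copied from the example above, so verify them against the USAGE docs.
agents:
  root:
    model: smart_model # refers to the named model entry below
    description: A helpful AI assistant
    instruction: Be helpful, accurate, and concise in your responses.

models:
  smart_model:
    provider: openai   # assumed field names for a named model entry
    model: gpt-5-mini
    temperature: 0.2
    max_tokens: 4096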
Many more examples can be found here!
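Among those examples are multi-agent teams: an agent can list sub-agents it delegates tasks to, and built-in toolsets such as think enable step-by-step reasoning. The sketch below is illustrative only; the agent names and instructions are made up, and the sub_agents and think keys are assumptions based on the bundled examples, so check the USAGE docs for the authoritative syntax.
agents:
  root:
    model: openai/gpt-5-mini
    description: Coordinator that delegates work to specialist agents
    instruction: |
      Split the user's request into tasks, delegate each task to the
      most suitable sub-agent, then combine the results into one answer.
    sub_agents: ["researcher", "writer"] # assumed key for declaring delegation targets
    toolsets:
      - type: think # assumed built-in reasoning toolset
  researcher:
    model: openai/gpt-5-mini
    description: Finds and verifies information
    instruction: Research the assigned topic and report concise findings.
  writer:
    model: openai/gpt-5-mini
    description: Produces clear write-ups
    instruction: Turn the researcher's findings into a polished final answer.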
cagent supports MCP servers, enabling agents to use a wide variety of external
tools and services.
It supports three transport types: stdio, http and sse.
Giving an agent access to tools via MCP is a quick way to greatly improve its capabilities, the quality of its results and its general usefulness.
Get started quickly with the Docker MCP Toolkit (https://docs.docker.com/ai/mcp-catalog-and-toolkit/toolkit/) and catalog (https://docs.docker.com/ai/mcp-catalog-and-toolkit/catalog/)
Here, we're giving the same basic agent from the example above access to a
containerized duckduckgo mcp server and its tools by using Docker's MCP
Gateway:
agents:
  root:
    model: openai/gpt-5-mini
    description: A helpful AI assistant
    instruction: |
      You are a knowledgeable assistant that helps users with various tasks.
      Be helpful, accurate, and concise in your responses.
    toolsets:
      - type: mcp
        ref: docker:duckduckgo # stdio transport
When using a containerized server via the Docker MCP gateway, you can configure any required settings/secrets/authentication using the Docker MCP Toolkit (https://docs.docker.com/ai/mcp-catalog-and-toolkit/toolkit/#example-use-the-github-official-mcp-server) in Docker Desktop.
Aside from the containerized MCP servers the Docker MCP Gateway provides, any standard MCP server can be used with cagent!
Here's an example similar to the above but adding read_file and write_file
tools from the rust-mcp-filesystem MCP server:
agents:
  root:
    model: openai/gpt-5-mini
    description: A helpful AI assistant
    instruction: |
      You are a knowledgeable assistant that helps users with various tasks.
      Be helpful, accurate, and concise in your responses. Write your search results to disk.
    toolsets:
      - type: mcp
        ref: docker:duckduckgo
      - type: mcp
        command: rust-mcp-filesystem # installed with `cargo install rust-mcp-filesystem`
        args: ["--allow-write", "."]
        tools: ["read_file", "write_file"] # Optional: specific tools only
        env:
          - "RUST_LOG=debug"
See the USAGE docs for more detailed information and examples
cagent can expose agents as MCP tools via the cagent mcp command, allowing other MCP clients to use your agents.
Each agent in your configuration becomes an MCP tool with its description.
# Start MCP server with local file
cagent mcp ./examples/dev-team.yaml
# Or use an OCI artifact
cagent mcp agentcatalog/pirate
This exposes each agent as a tool (e.g., root, designer, awesome_engineer) that MCP clients can call:
{
  "method": "tools/call",
  "params": {
    "name": "designer",
    "arguments": {
      "message": "Design a login page"
    }
  }
}
See MCP Mode documentation for detailed instructions on exposing your agents through MCP with Claude Desktop, Claude Code, and other MCP clients.
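As a rough illustration, MCP clients that launch stdio servers (Claude Desktop among them) are usually configured with an entry like the one below; the server name and config file path are placeholders, and the location of the client's configuration file varies by client, so follow the MCP Mode documentation for specifics.
{
  "mcpServers": {
    "cagent-dev-team": {
      "command": "cagent",
      "args": ["mcp", "/absolute/path/to/dev-team.yaml"]
    }
  }
}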
Install cagent with a single command using homebrew (https://brew.sh/)!
$ brew install cagent
Prebuilt binaries (https://github.com/docker/cagent/releases) for Windows, macOS and Linux can be found on the release page of the project's GitHub repository (https://github.com/docker/cagent/releases)
Once you've downloaded the appropriate binary for your platform, you may need to give it executable permissions. On macOS and Linux, this is done with the following command:
# linux amd64 build example
chmod +x /path/to/downloads/cagent-linux-amd64
You can then rename the binary to cagent and configure your PATH to be able
to find it (configuration varies by platform).
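For example, on macOS or Linux you might rename the binary and move it into a directory already on your PATH; the paths below are placeholders:
# rename and move the downloaded binary (placeholder paths)
mv /path/to/downloads/cagent-linux-amd64 /usr/local/bin/cagent
# alternatively, keep it in place and add its directory to PATH
export PATH="$PATH:/path/to/downloads"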
Depending on the models you configure your agents to use, you will need to set the corresponding provider API key. All of these keys are optional, but you will likely need at least one of them:
# For OpenAI models
export OPENAI_API_KEY=your_api_key_here
# For Anthropic models
export ANTHROPIC_API_KEY=your_api_key_here
# For Gemini models
export GOOGLE_API_KEY=your_api_key_here
# For xAI models
export XAI_API_KEY=your_api_key_here
# For Nebius models
export NEBIUS_API_KEY=your_api_key_here
# For Mistral models
export MISTRAL_API_KEY=your_api_key_here
# Run an agent!
cagent run ./examples/pirate.yaml
# or specify a different starting agent from the config, useful for agent teams
cagent run ./examples/pirate.yaml -a root
# or run directly from an image reference; here I'm pulling the pirate agent from the creek repository
cagent run creek/pirate