One self-hosted MCP server. Your context window loads only what the query needs. Legion routes across 4135 integrations so your AI client sees 3 tools instead of thousands.
How it works
npx @epicai/legion. The wizard detects your AI clients, configures MCP, and connects curated zero-credential integrations. 30 seconds.
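If you need to reproduce the wizard's output by hand, an MCP client entry would look something like this. This is a sketch assuming the common `mcpServers` config format used by MCP clients; the exact keys and file location depend on your client.

```json
{
  "mcpServers": {
    "legion": {
      "command": "npx",
      "args": ["-y", "@epicai/legion"]
    }
  }
}
```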
BM25 + miniCOIL scoring narrows 35,835 tools to 8. Zero inference cost. Your LLM sees only the shortlist.
Legion connects to the integration. Calls the tool. Returns the result. Your credentials never leave your machine.
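The execution step amounts to a direct call from your machine to the integration's API. A minimal sketch, with hypothetical helper names and env-var convention; the point is that the credential is read and attached locally, and only the JSON result flows back to the AI client:

```python
import json
import os
import urllib.request

def auth_headers(service: str) -> dict:
    # Hypothetical env-var naming. The token is read from your local
    # environment at call time; it is never sent to the AI client.
    token = os.environ.get(f"LEGION_{service.upper()}_TOKEN", "")
    return {"Authorization": f"Bearer {token}",
            "Content-Type": "application/json"}

def execute_tool(service: str, endpoint: str, payload: dict) -> dict:
    # Request goes straight from this machine to the integration's API;
    # only the decoded JSON result is returned to the caller.
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers=auth_headers(service),
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```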
Commands
Architecture
BM25 + miniCOIL sparse scoring ranks all 35,835 tools against your query. The top 8 pass through. Zero inference cost.
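The sparse stage can be pictured with plain BM25. A minimal sketch over whitespace tokens; Legion's actual pipeline adds miniCOIL term weighting, which is not reproduced here:

```python
import math
from collections import Counter

def bm25_rank(query: str, docs: list[str], k1: float = 1.5,
              b: float = 0.75, top_n: int = 8) -> list[int]:
    """Return indices of the top_n docs, BM25-scored against query."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    df = Counter()                      # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    ranked = []
    for i, toks in enumerate(tokenized):
        tf = Counter(toks)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(toks) / avgdl))
            score += idf * norm
        ranked.append((score, i))
    ranked.sort(key=lambda p: p[0], reverse=True)
    # Keep only docs that matched at least one query term.
    return [i for s, i in ranked[:top_n] if s > 0]
```

Pure arithmetic over token counts, so scoring tens of thousands of tool descriptions costs no model inference.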
Your AI client receives the shortlist and decides which tool to call. One inference call. Context stays small.
Governance
Read, query, search, list: the call executes immediately.
Write, update, modify: the call executes, then is flagged for review.
Delete, revoke, terminate: the call blocks until you approve it.
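The three tiers above reduce to a verb lookup on the tool name. A minimal sketch; the verb lists come from the text, while the tier labels, the leading-verb naming convention, and the conservative fallback for unrecognized verbs are assumptions:

```python
READ_VERBS = {"read", "query", "search", "list"}
WRITE_VERBS = {"write", "update", "modify"}
DESTRUCTIVE_VERBS = {"delete", "revoke", "terminate"}

def classify(tool_name: str) -> str:
    """Map a tool name like 'delete_user' to a governance tier."""
    verb = tool_name.lower().split("_")[0]
    if verb in DESTRUCTIVE_VERBS:
        return "block"   # held until the user approves
    if verb in READ_VERBS:
        return "allow"   # executes immediately
    # Write-tier verbs, and (an assumption) anything unrecognized,
    # execute but are flagged for review.
    return "flag"
```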