$ npx @epicai/legion

35,835 tools.
469 tokens.

One self-hosted MCP server. Your context window loads only what the query needs. Legion routes across 4,135 integrations so your AI client sees 3 tools instead of thousands.

View on GitHub
npm install

How it works

Your AI client connects to Legion.
Legion connects to everything else.

01

Install

npx @epicai/legion. The wizard detects your AI clients, configures MCP, and connects curated zero-credential integrations. 30 seconds.
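The wizard writes the MCP client entry for you. For reference, a hand-written equivalent could look like the sketch below, assuming the standard `mcpServers` config shape used by MCP clients and the `legion serve` subcommand; exact paths and config file locations vary by client.

```json
{
  "mcpServers": {
    "legion": {
      "command": "npx",
      "args": ["@epicai/legion", "serve"]
    }
  }
}
```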

02

Route

BM25 + miniCOIL scoring narrows 35,835 tools to 8. Zero inference cost. Your LLM sees only the shortlist.

03

Call

Legion connects to the integration. Calls the tool. Returns the result. Your credentials never leave your machine.

35,835
Tools
REST + MCP, deduplicated
4,135
Integrations
624 REST, 3,242 MCP, 246 dual
469
Tokens
Total context window cost
71
Zero-credential
Work out of the box

Commands

Bare subcommands. No dashes.

$ legion                 # setup wizard
$ legion query "..."     # route and call
$ legion list            # curated + custom
$ legion search <term>   # search 4,135 integrations
$ legion add <id>        # add an integration
$ legion configure       # connect your credentials
$ legion health          # check status
$ legion serve           # start MCP server
$ legion help            # all commands

Architecture

Two-tier routing.
Zero wasted tokens.

Tier 1

ToolPreFilter

BM25 + miniCOIL sparse scoring. Ranks all tools against your query. Top 8 pass through. Zero inference cost.

Tier 2

LLM Selection

Your AI client receives the shortlist and decides which tool to call. One inference call. Context stays small.
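The tier-1 idea can be sketched with a plain BM25 ranker: score every tool description lexically, keep the top 8, and let the LLM choose from the shortlist. This is a hedged approximation, not Legion's implementation; the miniCOIL term weighting and the real tool index are omitted, and the catalog below is a toy stand-in.

```python
import math
from collections import Counter

def bm25_rank(query: str, docs: list[str],
              k1: float = 1.5, b: float = 0.75, top_k: int = 8) -> list[int]:
    """Rank docs against a query with BM25; return indices of the top_k.

    A stand-in for the tier-1 prefilter: pure lexical scoring, no model
    inference, so shortlisting stays cheap even over tens of thousands
    of tool descriptions.
    """
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    avgdl = sum(len(t) for t in tokenized) / n
    df = Counter()                         # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    scored = []
    for i, toks in enumerate(tokenized):
        tf = Counter(toks)                 # term frequency in this doc
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            norm = tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scored.append((score, i))
    scored.sort(reverse=True)
    return [i for _, i in scored[:top_k]]

# Toy catalog standing in for the 35,835-tool index.
catalog = [
    "create an issue on a github repo",
    "post a message to a slack channel",
    "list open pull requests on a github repo",
]
shortlist = bm25_rank("open a github issue", catalog, top_k=2)
```

Tier 2 is deliberately absent here: the client's LLM receives only the shortlist and makes the final pick, which is where the single inference call happens.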

Governance

Read: auto. Write: escalate. Delete: approve.

auto

Read, query, search, list. Executes immediately.

escalate

Write, update, modify. Executes, flagged for review.

approve

Delete, revoke, terminate. Blocks until you approve.
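The three tiers above amount to a verb-based policy check. A minimal sketch follows; the verb lists mirror this page, but the function name and the verb-extraction heuristic are illustrative assumptions, not Legion's actual governance code.

```python
# Illustrative governance check. Verb lists come from the tiers above;
# everything else is a sketch, not Legion's API.

AUTO = {"read", "query", "search", "list"}
ESCALATE = {"write", "update", "modify"}
APPROVE = {"delete", "revoke", "terminate"}

def governance_tier(tool_name: str) -> str:
    """Classify a tool call by the action verb embedded in its name."""
    verbs = tool_name.lower().replace("-", "_").split("_")
    if any(v in APPROVE for v in verbs):
        return "approve"    # blocks until you approve
    if any(v in ESCALATE for v in verbs):
        return "escalate"   # executes, flagged for review
    return "auto"           # reads execute immediately
```

Checking the most destructive tier first means a name like `update_and_delete` still escalates all the way to `approve`.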