Let your
agents
cook.
One CLI to rule them all. Give your agents secure access to APIs, MCP servers, OpenAPI specs, and local CLIs. No wrappers. No SDK plumbing.
See it work.
60 seconds. Zero install. Pick a provider type and watch.
110-endpoint spec → 85 tools. No --name, no TOML to write, no code to generate.
ati assist tells the agent which tools to use and what params to pass. The agent runs them — real Apple stock price, real Tim Cook insider sells.
Same interface. Every time.
The agent doesn't know if it's talking to a REST API, an MCP server, a skill-generated provider, or a local CLI. It doesn't care.
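That uniformity is just an interface contract. A minimal Python sketch of the pattern — not ATI's internals, and the provider classes and tool name here are purely illustrative:

```python
from abc import ABC, abstractmethod


class Provider(ABC):
    """Uniform surface: the caller never sees which backend answers."""

    @abstractmethod
    def run(self, tool: str, **params) -> str: ...


class RestProvider(Provider):
    def run(self, tool: str, **params) -> str:
        return f"rest:{tool}"  # would call an HTTP endpoint


class McpProvider(Provider):
    def run(self, tool: str, **params) -> str:
        return f"mcp:{tool}"  # would speak the MCP protocol


def run_tool(provider: Provider, tool: str, **params) -> str:
    # Same call shape regardless of transport — the agent-facing contract.
    return provider.run(tool, **params)
```

The agent only ever sees `run_tool`; which transport sits behind it is a provider detail.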
Built by Agents, for Agents
The agent does
everything itself.
Install skills from GitHub. Ask what to do. Get a full workflow with commands. Execute. No human in the loop.
Two GitHub URLs → ATI reads each SKILL.md, calls Cerebras to generate provider manifests, and the tools go live. No TOML, no OpenAPI spec, no config.
ati assist loaded skills for fal-generate, fal-audio, and veed-fabric-lip-sync — the agent gets model-specific best practices, not just raw API docs.
Image generation → Text-to-speech → Lip-sync video. The agent chained three fal.ai models using the commands ati assist gave it. Same ati run every time.
Any agent
framework.
If your framework has a shell tool, ATI works. Give the agent Bash access, tell it about ATI in the system prompt. That's the whole integration.
from claude_agent_sdk import ClaudeAgentOptions, query

options = ClaudeAgentOptions(
    system_prompt="""You have ATI on your PATH.
- `ati tool search <query>` — find tools
- `ati tool info <name>` — inspect a tool
- `ati run <tool> --key value` — execute
- `ati assist "<question>"` — get help like asking a colleague""",
    allowed_tools=["Bash"],
)

# That's it. The agent discovers and runs tools on its own.
async for msg in query(prompt="Research quantum computing papers", options=options):
    print(msg)
The agent calls ati tool search, picks the right tool, calls ati run. No human in the loop.
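If your framework has no shell tool, the same loop is a few lines of subprocess glue. A sketch, assuming `ati` is on PATH; the `stocks.quote` tool name and `--symbol` parameter are hypothetical:

```python
import subprocess


def build_ati_run(tool: str, params: dict) -> list[str]:
    """Build an `ati run <tool> --key value` argv from a param dict."""
    argv = ["ati", "run", tool]
    for key, value in params.items():
        argv += [f"--{key}", str(value)]
    return argv


def run_tool(tool: str, params: dict) -> str:
    """Execute the tool via the ATI CLI and return its stdout."""
    result = subprocess.run(
        build_ati_run(tool, params),
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

Wire `run_tool` up as the framework's tool-call handler and the agent's side of the contract stays identical.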
# OpenAI Agents SDK — same pattern
agent = Agent(tools=[shell_tool], instructions=system_prompt)

# LangChain
agent = create_react_agent(llm, [ShellTool()], prompt=system_prompt)

# Google ADK
agent = Agent(tools=[shell_tool], instruction=system_prompt)

# Pi SDK (TypeScript)
session = createAgentSession({ tools: [createBashTool(cwd)], resourceLoader })
Security that
scales.
Three tiers. Start simple, graduate when you need to. Same ati run interface at every level.
Plain-text credentials.
Zero ceremony.
Store keys with ati key set. They're written to ~/.ati/credentials with 0600 (owner-only) permissions. Keys can also come from environment variables with the ATI_KEY_ prefix.
Local development, testing, prototyping. When you trust the machine and just want to move fast.
JWT Scoping
Each agent session gets a JWT with identity, permissions, and expiry. Wildcard scopes grant access to all tools from a provider.
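A minimal Python sketch of how such a scope check could work. The claim names and the `github.*` wildcard are illustrative assumptions, not ATI's actual token format:

```python
import fnmatch
import time

def scope_allows(scopes: list[str], tool: str) -> bool:
    """A wildcard scope like `github.*` grants every tool from that provider."""
    return any(fnmatch.fnmatch(tool, scope) for scope in scopes)

# Hypothetical session claims: identity, permissions, expiry.
claims = {
    "sub": "agent-session-42",   # identity
    "scopes": ["github.*"],      # wildcard: all tools from the github provider
    "exp": time.time() + 3600,   # expiry, one hour out
}

def authorize(claims: dict, tool: str) -> bool:
    """Reject expired tokens, then match the requested tool against scopes."""
    return time.time() < claims["exp"] and scope_allows(claims["scopes"], tool)
```

Expiry bounds the blast radius of a leaked token; scoping bounds what a single session can touch even while the token is live.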
Switch modes with one env var. The agent never changes its commands.