Let your agents cook.

One CLI to rule them all. Give your agents secure access to APIs, MCP servers, OpenAPI specs, and local CLIs. No wrappers. No SDK plumbing.

See it work.

60 seconds. Zero install. Pick a provider type and watch.

1 Import an API from its spec
Terminal
$ ati provider import-openapi https://finnhub.io/api/v2/spec.json
Saved manifest to ~/.ati/manifests/finnhub.toml
Imported 85 operations from "Finnhub — Real-time stock quotes..."
$ ati key set finnhub_api_key "your-key-here"
Stored key: finnhub_api_key
$ ati tool list --provider finnhub | head -5
DESCRIPTION            PROVIDER   TOOL
Symbol Lookup          finnhub    finnhub__symbol-search
Company Profile        finnhub    finnhub__company-profile2
Quote                  finnhub    finnhub__quote
Insider Transactions   finnhub    finnhub__insider-transactions
Basic Financials       finnhub    finnhub__company-basic-...

110-endpoint spec → 85 tools. No --name, no TOML to write, no code to generate.

2 Ask what to do, then run it
Terminal
$ ati assist finnhub "research Apple stock — price, insider activity, and sentiment"
Here are the exact commands to research Apple (AAPL) stock:
1. Current Price
ati run finnhub__quote --symbol AAPL
2. Insider Transactions
ati run finnhub__insider-transactions --symbol AAPL
3. News Sentiment
ati run finnhub__news-sentiment --symbol AAPL
$ ati run finnhub__quote --symbol AAPL
c: 262.52 ← current price
d: -1.23 ← change
dp: -0.4664 ← percent change
h: 266.15 ← day high
l: 261.43 ← day low
o: 264.65 ← open
pc: 263.75 ← previous close
$ ati run finnhub__insider-transactions --symbol AAPL
data:
  - name: "COOK TIMOTHY D", transactionCode: "S"
    change: -59751, share: 3280295
    transactionPrice: 257.57, filingDate: "2025-10-03"

ati assist tells the agent which tools to run and with what parameters. The agent runs them — real Apple stock price, real Tim Cook insider sells.

Same interface. Every time.

ati run <tool> --arg value

The agent doesn't know if it's talking to a REST API, an MCP server, a skill-generated provider, or a local CLI. It doesn't care.
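That uniformity is easy to see from the agent's side. The helper below is a hypothetical sketch (not part of ATI): whatever backs a tool, the command an agent shells out to has the same shape.

```python
import subprocess

def build_argv(tool: str, **args) -> list[str]:
    """Build the `ati run` argv for any tool: REST, MCP, skill, or CLI alike."""
    argv = ["ati", "run", tool]
    for key, value in args.items():
        argv += [f"--{key}", str(value)]
    return argv

def run_tool(tool: str, **args) -> str:
    """Shell out to ATI and return stdout (requires `ati` on PATH)."""
    result = subprocess.run(build_argv(tool, **args),
                            capture_output=True, text=True, check=True)
    return result.stdout

# The call shape is identical regardless of what backs the tool:
print(build_argv("finnhub__quote", symbol="AAPL"))
# ['ati', 'run', 'finnhub__quote', '--symbol', 'AAPL']
```

Because every provider type collapses to the same argv shape, a framework only ever needs one tool definition.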

Built by Agents, for Agents

The agent does everything itself.

Install skills from GitHub. Ask what to do. Get a full workflow with commands. Execute. No human in the loop.

1 Agent installs skills from GitHub
Terminal
# Agent finds community skills on GitHub and installs them
$ ati skill install https://github.com/fal-ai-community/skills#fal-generate
Generating manifest for 'fal' from SKILL.md...
Generated manifest for 'fal' at ~/.ati/manifests/fal.toml
Hint: run `ati key set fal_api_key <your-key>` to configure credentials.
Installed 'fal-generate' to ~/.ati/skills/fal-generate
$ ati skill install https://github.com/fal-ai-community/skills#fal-audio
Generating manifest for 'fal-audio' from SKILL.md...
Generated manifest for 'fal-audio' at ~/.ati/manifests/fal-audio.toml
Installed 'fal-audio' to ~/.ati/skills/fal-audio
$ ati key set fal_api_key sk-your-key-here
Stored key: fal_api_key
$ ati tool list --provider fal
DESCRIPTION                                TOOL
Submit a generation job to fal.ai queue    fal__submit
Check status of a queued fal.ai job        fal__status
Get the result of a completed fal.ai job   fal__result
Cancel a queued fal.ai job                 fal__cancel

Two GitHub URLs → ATI reads each SKILL.md, calls Cerebras to generate provider manifests, and the tools are live. No TOML, no OpenAPI spec, no config.

2 Agent asks how to do it
ati assist
$ ati assist "trying to make a lip sync video of tech streamer praising ati.tools"
Great project! Here's the workflow to create a lip-synced tech streamer video praising ati.tools:
Step 1: Generate the tech streamer face
ati run fal__submit --endpoint_id "fal-ai/flux-2-turbo" \
--prompt "Professional tech streamer, young male with glasses,
front-facing portrait, good lighting, neutral expression,
4K quality" \
--image_size "portrait_4_3" --num_images 1
Save the request_id and get the image URL with fal__result.
Step 2: Generate the audio (praise script)
ati run fal__submit --endpoint_id "fal-ai/elevenlabs/tts/eleven-v3" \
--text "Hey everyone, I've been using ati.tools and it's
absolutely incredible! The CLI tool integration is seamless,
and the AI agent capabilities are next-level. If you're
serious about AI automation, you need to check this out!" \
--voice_id "cjVigY5qzO86Huf0OWal" \
--model_id "eleven_multilingual_v2"
Save the request_id and get the audio URL with fal__result.
Step 3: Lip-sync the video
ati run fal__submit --endpoint_id "fal-ai/veed/fabric/1.0" \
--image_url "YOUR_IMAGE_URL_FROM_STEP_1" \
--audio_url "YOUR_AUDIO_URL_FROM_STEP_2"
Check status with fal__status, then get the final video URL with fal__result.
Tips:
- Use voice_id "cjVigY5qzO86Huf0OWal" for an energetic, friendly male voice
- Keep the script under 60 seconds for best lip-sync quality
- The face image should be front-facing with good lighting and neutral expression

ati assist loaded skills for fal-generate, fal-audio, and veed-fabric-lip-sync — the agent gets model-specific best practices, not just raw API docs.

3 Agent executes the whole workflow
Terminal
# Step 1 — Generate the face
$ ati run fal__submit \
--endpoint_id "fal-ai/flux-2-turbo" \
--prompt "Professional tech streamer, young male with glasses, front-facing portrait" \
--image_size "portrait_4_3"
request_id: 1d491d8e-5c22-417b-a62b-471aa7f380e3
$ ati run fal__result --endpoint_id "fal-ai/flux-2-turbo" --request_id "1d491d8e..."
images: [url: "https://v3b.fal.media/files/.../streamer.jpg"]
# Step 2 — Generate speech with ElevenLabs
$ ati run fal__submit \
--endpoint_id "fal-ai/elevenlabs/tts/eleven-v3" \
--text "Hey everyone, I've been using ati.tools and it's absolutely incredible!" \
--voice_id "cjVigY5qzO86Huf0OWal"
request_id: f9b24972-9ea9-47bd-9e6c-1fc8f48c70c5
$ ati run fal__result --endpoint_id "fal-ai/elevenlabs" --request_id "f9b24972..."
audio: url: "https://v3b.fal.media/files/.../output.mp3"
# Step 3 — Lip-sync with VEED Fabric
$ ati run fal__submit \
--endpoint_id "veed/fabric-1.0" \
--image_url "https://v3b.fal.media/files/.../streamer.jpg" \
--audio_url "https://v3b.fal.media/files/.../output.mp3"
request_id: 1c7bdab9-3572-45fe-829d-c5c87071e7d9
$ ati run fal__result --endpoint_id "veed/fabric-1.0" --request_id "1c7bdab9..."
video: url: "https://v3b.fal.media/files/.../lipsync.mp4"
Done. Three models, one interface, zero config.

Image generation → Text-to-speech → Lip-sync video. The agent chained three fal.ai models using the commands ati assist gave it. Same ati run every time.

Any agent framework.

If your framework has a shell tool, ATI works. Give the agent Bash access, tell it about ATI in the system prompt. That's the whole integration.

claude agent sdk
import asyncio
from claude_agent_sdk import ClaudeAgentOptions, query

options = ClaudeAgentOptions(
    system_prompt="""You have ATI on your PATH.
- `ati tool search <query>` — find tools
- `ati tool info <name>` — inspect a tool
- `ati run <tool> --key value` — execute
- `ati assist "<question>"` — get help like asking a colleague""",
    allowed_tools=["Bash"],
)

# That's it. The agent discovers and runs tools on its own.
async def main():
    async for msg in query(prompt="Research quantum computing papers", options=options):
        print(msg)

asyncio.run(main())

The agent calls ati tool search, picks the right tool, calls ati run. No human in the loop.

# OpenAI Agents SDK — same pattern
agent = Agent(tools=[shell_tool], instructions=system_prompt)

# LangChain
agent = create_react_agent(llm, [ShellTool()], prompt=system_prompt)

# Google ADK
agent = Agent(tools=[shell_tool], instruction=system_prompt)

# Pi SDK (TypeScript)
session = createAgentSession({ tools: [createBashTool(cwd)], resourceLoader })

Security that scales.

Three tiers. Start simple, graduate when you need to. Same ati run interface at every level.

Simplest

Plain-text credentials.
Zero ceremony.

Store keys with ati key set. They're written to ~/.ati/credentials with 0600 permissions. Keys can also be supplied via environment variables using the ATI_KEY_ prefix.

$ ati key set github_token ghp_abc123
Stored key: github_token
$ ati key list
github_token ghp_...c123
finnhub_key sk-...here
When to use

Local development, testing, prototyping. When you trust the machine and just want to move fast.
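For ephemeral environments (CI, containers) the same keys can come from the environment instead of disk. A sketch, assuming ATI_KEY_<NAME> maps to the credential named <name> (the exact casing rule isn't documented here):

```shell
# Instead of writing keys to ~/.ati/credentials, export them for the session.
# Assumption: ATI_KEY_GITHUB_TOKEN corresponds to the key `github_token`.
export ATI_KEY_GITHUB_TOKEN="ghp_abc123"
export ATI_KEY_FINNHUB_API_KEY="your-key-here"
```

Nothing touches disk, and the agent's `ati run` commands are unchanged.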

Diagram: on your machine, the agent calls ati run; ATI reads credentials from ~/.ati/credentials (0600) and manifests from ~/.ati/manifests/*.toml on every call.

JWT Scoping

Each agent session gets a JWT with identity, permissions, and expiry. Wildcard scopes grant access to all tools from a provider.

tool:web_search One specific tool
tool:github__* All GitHub MCP tools
help Access to ati assist
* Everything (dev only)
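Wildcard scopes behave like glob patterns over tool names. A minimal sketch of the matching rule, using Python's fnmatch; this is illustrative, not ATI's actual implementation:

```python
from fnmatch import fnmatchcase

def scope_allows(scopes: str, tool: str) -> bool:
    """True if any granted scope covers `tool:<name>` (illustrative, not ATI's code)."""
    needed = f"tool:{tool}"
    return any(fnmatchcase(needed, pattern) for pattern in scopes.split())

scopes = "tool:clinicaltrials__* tool:finnhub__* help"
print(scope_allows(scopes, "finnhub__quote"))        # True
print(scope_allows(scopes, "github__create-issue"))  # False
```

A token scoped to `tool:finnhub__*` can call every Finnhub tool but nothing else, so one token per agent session cleanly bounds its blast radius.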
# Issue a scoped token
$ ati token issue \
--sub agent-7 \
--scope "tool:clinicaltrials__* tool:finnhub__* help" \
--ttl 3600
eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9...
# Inspect it
$ ati token inspect $TOKEN
sub: agent-7
scope: tool:clinicaltrials__* tool:finnhub__* help
exp: 2026-03-04T21:30:00Z

Switch modes with one env var. The agent never changes its commands.