
Quick Start

1. Initialize a project

npx amodal init

This scaffolds an .amodal/ directory with a config file, sample skill, and sample knowledge document. The init command is interactive — it asks for your product type and sets up appropriate templates.
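A freshly initialized project looks roughly like this. The layout is a sketch: only `config.json` and the existence of a sample skill and sample knowledge document are stated above; the subdirectory and file names here are illustrative assumptions, and the exact templates depend on your answers during init.

```
.amodal/
├── config.json        # agent name, provider, model
├── skills/            # sample skill (name assumed)
│   └── example-skill.md
└── knowledge/         # sample knowledge document (name assumed)
    └── example-doc.md
```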

2. Configure your provider

Set your LLM provider credentials. Amodal auto-detects the provider from environment variables:

# Pick one:
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
export GOOGLE_API_KEY=...

Or configure explicitly in .amodal/config.json:

{
  "name": "My Agent",
  "provider": "anthropic",
  "model": "claude-sonnet-4-20250514"
}
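Auto-detection from the environment behaves roughly like the sketch below. This is an illustration, not Amodal's actual code; in particular, the precedence order when several keys are set is an assumption.

```shell
# Sketch of provider auto-detection from environment variables.
# Assumption: if multiple keys are set, earlier checks win.
detect_provider() {
  if [ -n "${ANTHROPIC_API_KEY:-}" ]; then
    echo anthropic
  elif [ -n "${OPENAI_API_KEY:-}" ]; then
    echo openai
  elif [ -n "${GOOGLE_API_KEY:-}" ]; then
    echo google
  else
    echo none
  fi
}
```

An explicit `provider` in `.amodal/config.json` removes the ambiguity, which is why pinning it there is the safer choice for shared projects.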

3. Add a connection

Install a pre-built plugin or create a custom connection:

# Install a plugin (e.g., Slack, GitHub, Datadog)
amodal connect slack
 
# Or sync from an OpenAPI spec
amodal sync --from https://api.example.com/openapi.json

Connections live in .amodal/connections/ and give the agent both API access and documentation.

4. Start the dev server

amodal dev

This starts the runtime server on localhost:3847 with:

  • Hot reload — edit any .amodal/ file and the agent updates instantly
  • File watching with 300ms debounce
  • Session management with TTL
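The 300ms debounce means a burst of rapid saves triggers one reload rather than one per write. A minimal sketch of that pattern, purely illustrative (Amodal's real watcher is internal to `amodal dev`; the log file here is a stand-in for the reload action):

```shell
# Debounce sketch: each change event cancels the reload still pending
# from the previous event, then schedules a fresh one 300ms out.
RELOAD_LOG="${RELOAD_LOG:-/tmp/reload.log}"

on_change() {
  if [ -n "${_pending:-}" ]; then
    kill "$_pending" 2>/dev/null
  fi
  ( sleep 0.3 && echo "reload" >> "$RELOAD_LOG" ) &
  _pending=$!
}
```

Calling `on_change` for every filesystem event collapses three quick saves into a single reload, because only the last pending timer survives.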

5. Chat with your agent

amodal chat

This opens a terminal chat UI (React-based TUI with Ink). You can also connect to a remote server:

amodal chat --url http://localhost:3847

Or resume a previous session:

amodal chat --resume latest

6. Validate and inspect

amodal validate    # Check for missing connections, config issues
amodal inspect     # Show compiled context with token counts

What's Next