The orchestrator is not open source today.

Avocado Studio is built from three layers: the site templates, the editor app, and the `@ai-site-editor/site-sdk`. All three are source-distributed and live in the public repo — you can read, fork, and self-host them directly. The orchestrator (the brain that runs sessions, calls the LLMs, and serves draft state) is the one component we don’t currently release under an open-source license. Docker is the supported path for running it on your own infrastructure. If you need access to the orchestrator image or source for self-hosting, open an issue or reach out and we’ll work with you.

Where to host the orchestrator

The orchestrator is a small Node.js Fastify service. It’s stateful but not heavy — most production deployments fit comfortably on the smallest paid tier of any modern container host.

Tested hosts
| Host | Notes |
|---|---|
| Render | The reference deployment — what we run internally today. Free tier is too small for real use; the smallest paid tier is fine. Use a Render disk for persistent state. |
| Fly.io | Works well — the always-on machine model fits the orchestrator’s long-lived sessions. Use a Fly Volume for `/app/.data`. |
| Railway | Works. Make sure you provision a volume for state persistence; ephemeral filesystems will lose sessions on every redeploy. |
| DigitalOcean App Platform | Works as a Web Service with an attached volume. |
| AWS ECS / Fargate, GCP Cloud Run (with min instances ≥ 1), Azure Container Apps | All work. Cloud Run requires min instances = 1 so the container doesn’t cold-start mid-session. |
| Kubernetes | Works as a single-replica StatefulSet with a PersistentVolumeClaim mounted at `/app/.data`. Do not run multiple replicas — see the constraints below. |
| Self-managed VPS (Hetzner, Linode, etc.) | The simplest option: `docker compose up -d` and a reverse proxy (Caddy, nginx, Traefik) for HTTPS. |
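For the VPS path, a minimal sketch of the reverse-proxy side, assuming Caddy and the orchestrator on its default port 4200 (the hostname is a placeholder):

```
orchestrator.example.com {
    reverse_proxy localhost:4200 {
        flush_interval -1
    }
}
```

`flush_interval -1` disables response buffering so server-sent events stream through to the editor — see the hosting constraints below.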
Resource requirements
- Memory: ~300–500 MB at idle, ~600–800 MB under typical load. A 1 GB instance is comfortable; the smallest “$7/month-ish” tier on most hosts is enough for early use.
- CPU: Mostly I/O-bound — the orchestrator spends most of its time waiting on LLM API calls, not computing. 0.5 vCPU is fine for a handful of concurrent sessions; 1 vCPU is comfortable for a small team.
- Disk: A persistent volume for `/app/.data` (session state, telemetry, generated images). 1–5 GB is plenty for early use; generated-image storage is what grows fastest if you use AI image generation heavily.
- Network: Outbound HTTPS to your chosen LLM providers (`api.anthropic.com`, `api.openai.com`, `generativelanguage.googleapis.com`); inbound HTTPS from your editor and site origins. No inbound from end users — only your editor and Next.js site need to reach it.
Critical hosting constraints
A few things matter regardless of which host you pick:

- Persistent volume is required. The orchestrator survives restarts by writing snapshots to `/app/.data/orchestrator-state.json`. If you mount that path on an ephemeral filesystem (Cloud Run without a volume, Heroku-style ephemeral dynos, default container hosts without disk attachment), every redeploy or container reschedule will wipe all sessions and undo history. Always attach a persistent volume — even 1 GB is enough.
- SSE-friendly reverse proxy. The chat endpoint streams server-sent events for live editor updates. Some reverse proxies and CDNs buffer responses by default, which makes the editor look frozen until the full response lands. If you put a reverse proxy in front of the orchestrator, disable response buffering on `/chat/*` and `/sites-agent/*` (in nginx: `proxy_buffering off;`; in Caddy: `flush_interval -1` on `reverse_proxy`; in Cloudflare: bypass cache for these paths).
- Long timeouts. Sites-agent runs (full URL migration, repo integration) can take several minutes. If your host has a default request timeout of 30 s or 60 s, the agent will be killed mid-run. Bump request timeouts to at least 10 minutes on the orchestrator’s routes — most hosts let you configure this per service.
- CORS for the editor’s origin. Set `ORCHESTRATOR_CORS_ORIGINS` to include both your site and editor origins (HTTPS, no trailing slash). See CORS configuration below.
- Public HTTPS reachable from your editor and site. Both the editor (browser) and your Next.js site (server-side draft fetches) need to call the orchestrator. If your editor is on `https://editor.example.com` and your site is on `https://www.example.com`, the orchestrator needs to be on a URL both can reach — usually a public HTTPS endpoint like `https://orchestrator.example.com`.
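For nginx specifically, the buffering and timeout constraints above can be sketched like this (the upstream address and the exact path regex are assumptions — adjust to your deployment):

```nginx
location ~ ^/(chat|sites-agent)/ {
    proxy_pass http://127.0.0.1:4200;
    proxy_http_version 1.1;
    proxy_buffering off;       # let server-sent events stream through unbuffered
    proxy_read_timeout 600s;   # allow long sites-agent runs (10 minutes)
    proxy_set_header Host $host;
}
```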
Building the image
If you have access to the orchestrator source (the `apps/orchestrator` directory in the private workspace), you can build the image yourself from the repository root. The build pulls in the workspace packages the orchestrator depends on (`packages/shared`, `packages/migration-sdk`).
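A minimal build invocation sketch — the Dockerfile path and the image tag `avocado-orchestrator` are assumptions about the repo layout, not confirmed names:

```shell
# Build from the repo root so workspace packages
# (packages/shared, packages/migration-sdk) are in the build context
docker build -f apps/orchestrator/Dockerfile -t avocado-orchestrator .
```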
Running standalone
Required environment variables
At minimum you need one AI provider key:

| Variable | Description |
|---|---|
| `ANTHROPIC_API_KEY` | Claude API key (recommended — most battle-tested) |
| `OPENAI_API_KEY` | OpenAI API key |
| `GOOGLE_GENAI_API_KEY` | Google Gemini API key |
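Putting it together, a standalone `docker run` sketch using the defaults described below (port 4200, state under `/app/.data`); the image name and volume name are assumptions:

```shell
docker run -d --name avocado-orchestrator \
  -p 4200:4200 \
  -v orchestrator-data:/app/.data \
  -e ANTHROPIC_API_KEY="$ANTHROPIC_API_KEY" \
  -e ORCHESTRATOR_CORS_ORIGINS=https://editor.example.com,https://www.example.com \
  avocado-orchestrator
```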
CORS configuration
By default the container accepts requests from `http://localhost:3000` and `http://localhost:4100`. For production, set the allowed origins explicitly.

State persistence
The orchestrator writes session state, telemetry, and generated images to `/app/.data` inside the container. Mount a volume there to persist data across restarts.

The image pre-configures these paths:

- `ORCHESTRATOR_STATE_FILE=/app/.data/orchestrator-state.json`
- `CHAT_TELEMETRY_FILE=/app/.data/chat-telemetry.ndjson`
- `ORCHESTRATOR_GENERATED_IMAGE_DIR=/app/.data/generated-images`
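To verify state is actually landing on the volume rather than the ephemeral container layer, you can list it from a throwaway container (the volume name `orchestrator-data` is an assumption — use whatever name you mounted):

```shell
# Lists orchestrator-state.json, chat-telemetry.ndjson, generated-images/
docker run --rm -v orchestrator-data:/data alpine ls -la /data
```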
Using docker-compose
A `docker-compose.yml` at the repo root runs the orchestrator with sensible defaults: it creates a named volume (`orchestrator-data`) and loads env vars from `.env` at the repo root.
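If you are assembling your own compose file instead, a minimal sketch consistent with the defaults above (image name, port mapping, and volume name are assumptions):

```yaml
services:
  orchestrator:
    image: avocado-orchestrator
    ports:
      - "4200:4200"
    env_file: .env               # API keys, CORS origins, etc.
    volumes:
      - orchestrator-data:/app/.data   # persistent session state
    restart: unless-stopped

volumes:
  orchestrator-data: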
Health check
The container includes a health check that polls `http://127.0.0.1:4200/health` every 30 seconds. Check status with:
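One way to read the Docker-reported health status (the container name `avocado-orchestrator` matches the troubleshooting section below — adjust if yours differs):

```shell
docker inspect --format '{{.State.Health.Status}}' avocado-orchestrator
```

This prints `starting`, `healthy`, or `unhealthy` depending on the most recent probes.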
Environment reference
See `.env.example` at the repo root for the complete list of environment variables. Common Docker overrides:

| Variable | Purpose |
|---|---|
| `PORT` | HTTP port (default: `4200`) |
| `NODE_ENV` | `production` by default in the image |
| `ORCHESTRATOR_CORS_ORIGINS` | Comma-separated list of allowed origins |
| `ORCHESTRATOR_STATE_FILE` | Path to session state JSON |
| `CHAT_TELEMETRY_FILE` | Path to telemetry NDJSON |
| `ORCHESTRATOR_GENERATED_IMAGE_DIR` | Directory for generated images |
| `PUBLISH_TOKEN` | Required token for `/publish/*` endpoints |
| `ACCESS_PASSWORD_HASH` | Optional password hash for `/auth/verify` |
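A minimal `.env` sketch combining the variables above (all values are placeholders; which variables you need depends on your setup — `.env.example` is the authoritative list):

```
ANTHROPIC_API_KEY=your-key-here
ORCHESTRATOR_CORS_ORIGINS=https://editor.example.com,https://www.example.com
PUBLISH_TOKEN=change-me
```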
Running locally without Docker (contributors only)
This section requires access to the orchestrator source, which is not currently public — see the closed-source note at the top of the page. If you don’t have repo access, the Docker image is your supported path. If you do have access (early contributor, design partner, etc.), you can run the orchestrator directly from source for a faster dev loop.
Troubleshooting
Container exits immediately
Check logs: `docker logs avocado-orchestrator`. The most common cause is missing API keys or an invalid `.env` file.
CORS errors from editor or site
Set `ORCHESTRATOR_CORS_ORIGINS` to include both the site and editor origins (no trailing slashes).
State not persisting
Ensure the volume is mounted at `/app/.data` and that the container user has write permissions.
Health check failing
Wait for the 10-second start period. If it still fails, check that the orchestrator is listening on `0.0.0.0:4200` (it should be by default) and that no firewall is blocking the port.
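Two quick checks from the host, assuming the port is published and the container is named `avocado-orchestrator` as in the examples above:

```shell
# Hit the health endpoint directly; non-zero exit means it is not responding
curl -fsS http://localhost:4200/health

# Dump the recent health-probe results Docker has recorded
docker inspect --format '{{json .State.Health}}' avocado-orchestrator
```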