Documentation Index

Fetch the complete documentation index at: https://docs.avocadostudio.dev/llms.txt

Use this file to discover all available pages before exploring further.

The orchestrator is not open source today. Avocado Studio is built from three layers: the site templates, the editor app, and the @ai-site-editor/site-sdk. All three are source-distributed and live in the public repo — you can read, fork, and self-host them directly. The orchestrator (the brain that runs sessions, calls the LLMs, and serves draft state) is the one component we don't currently release under an open-source license; Docker is the supported path for running it on your own infrastructure. If you need access to the orchestrator image or source for self-hosting, open an issue or reach out — we'll work with you.
For self-hosted deployments of the orchestrator, you run it as a Docker container on any host that supports long-lived containers with a persistent volume. The site templates and editor app remain source-distributed and deploy as static / serverless apps to Vercel, Netlify, or any other host that runs Next.js / Vite.

Where to host the orchestrator

The orchestrator is a small Node.js Fastify service. It’s stateful but not heavy — most production deployments fit comfortably on the smallest paid tier of any modern container host.

Tested hosts

  • Render: The reference deployment — what we run internally today. The free tier is too small for real use; the smallest paid tier is fine. Use a Render disk for persistent state.
  • Fly.io: Works well — the always-on machine model fits the orchestrator's long-lived sessions. Use a Fly Volume for /app/.data.
  • Railway: Works. Make sure you provision a volume for state persistence; ephemeral filesystems lose sessions on every redeploy.
  • DigitalOcean App Platform: Works as a Web Service with an attached volume.
  • AWS ECS / Fargate, GCP Cloud Run (min instances ≥ 1), Azure Container Apps: All work. Cloud Run requires min instances = 1 so the container doesn't cold-start mid-session.
  • Kubernetes: Works as a single-replica StatefulSet with a PersistentVolumeClaim mounted at /app/.data. Do not run multiple replicas — see the constraints below.
  • Self-managed VPS (Hetzner, Linode, etc.): The simplest option — docker compose up -d plus a reverse proxy (Caddy, nginx, Traefik) for HTTPS.
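For the VPS route, a minimal sketch of bringing the orchestrator up behind Caddy. The volume name, domain, and image tag below are illustrative assumptions, not fixed names:

```shell
# Create a named volume so /app/.data survives restarts and redeploys
docker volume create avocado-data

# Run the orchestrator, binding only to localhost so the proxy fronts it
# (image tag and .env path are assumptions — adjust to your setup)
docker run -d --name avocado-orchestrator \
  -p 127.0.0.1:4200:4200 \
  --env-file .env \
  -v avocado-data:/app/.data \
  avocado-orchestrator:latest

# Terminate HTTPS with Caddy; it provisions certificates automatically
caddy reverse-proxy --from orchestrator.example.com --to 127.0.0.1:4200
```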

Resource requirements

  • Memory: ~300–500 MB at idle, ~600–800 MB under typical load. A 1 GB instance is comfortable; the smallest “$7/month-ish” tier on most hosts is enough for early use.
  • CPU: Mostly I/O-bound — the orchestrator spends most of its time waiting on LLM API calls, not computing. 0.5 vCPU is fine for a handful of concurrent sessions; 1 vCPU is comfortable for a small team.
  • Disk: A persistent volume for /app/.data (session state, telemetry, generated images). 1–5 GB is plenty for early use; generated-image storage is what grows fastest if you use AI image generation heavily.
  • Network: Outbound HTTPS to your chosen LLM providers (api.anthropic.com, api.openai.com, generativelanguage.googleapis.com); inbound HTTPS from your editor and site origins. No inbound from end-users — only your editor and Next.js site need to reach it.

Critical hosting constraints

A few things matter regardless of which host you pick:
  • Single instance only. The orchestrator keeps all session state — draft pages, undo history, chat threads, version log, site configs — in in-memory Maps, with periodic snapshots to disk. Do not run multiple replicas behind a load balancer: requests would round-robin between processes that don't share state, and sessions would appear to randomly lose their work. Horizontal scaling is on the roadmap but not implemented today. Configure your host for exactly one instance and scale vertically (more memory / CPU on a single machine) if you need more capacity.
  • Persistent volume is required. The orchestrator survives restarts by writing snapshots to /app/.data/orchestrator-state.json. If you mount that path on an ephemeral filesystem (Cloud Run without a volume, Heroku-style ephemeral dynos, default container hosts without disk attachment), every redeploy or container reschedule will wipe all sessions and undo history. Always attach a persistent volume — even 1 GB is enough.
  • SSE-friendly reverse proxy. The chat endpoint streams server-sent events for live editor updates. Some reverse proxies and CDNs buffer responses by default, which makes the editor look frozen until the full response lands. If you put a reverse proxy in front of the orchestrator, disable response buffering on /chat/* and /sites-agent/* (in nginx: proxy_buffering off; in Caddy: flush_interval -1 on reverse_proxy; in Cloudflare: bypass cache for these paths).
  • Long timeouts. Sites-agent runs (full URL migration, repo integration) can take several minutes. If your host has a default request timeout of 30s or 60s, the agent will be killed mid-run. Bump request timeouts to at least 10 minutes on the orchestrator’s routes — most hosts let you configure this per service.
  • CORS for the editor’s origin. Set ORCHESTRATOR_CORS_ORIGINS to include both your site and editor origins (HTTPS, no trailing slash). See CORS configuration below.
  • Public HTTPS reachable from your editor and site. Both the editor (browser) and your Next.js site (server-side draft fetches) need to call the orchestrator. If your editor is on https://editor.example.com and your site is on https://www.example.com, the orchestrator needs to be on a URL both can reach — usually a public HTTPS endpoint like https://orchestrator.example.com.
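One way to sanity-check CORS and reachability once the orchestrator is up. The hostnames and the /chat path below are illustrative placeholders — substitute your own origins and a real route:

```shell
# Simulate the preflight request the editor's browser would send.
# If CORS is configured, the response should include an
# Access-Control-Allow-Origin header echoing the editor origin.
curl -si -X OPTIONS https://orchestrator.example.com/chat \
  -H "Origin: https://editor.example.com" \
  -H "Access-Control-Request-Method: POST" | head -20
```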

Building the image

If you have access to the orchestrator source (the apps/orchestrator directory in the private workspace), you can build the image yourself from the repository root:
docker build -f apps/orchestrator/Dockerfile -t avocado-orchestrator:latest .
The build is multi-stage and uses the pnpm workspace to install only the orchestrator’s dependencies (packages/shared, packages/migration-sdk).
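To run that image on a remote host you typically push it to a registry first. The registry name below is a placeholder — use your own registry or your host's built-in one:

```shell
# Tag the locally built image for your registry (placeholder hostname)
docker tag avocado-orchestrator:latest registry.example.com/avocado-orchestrator:latest

# Push so your container host can pull it
docker push registry.example.com/avocado-orchestrator:latest
```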

Running standalone

docker run -d \
  --name avocado-orchestrator \
  -p 4200:4200 \
  --env-file .env \
  -v avocado-data:/app/.data \
  avocado-orchestrator:latest
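After starting the container, you can confirm it came up cleanly. The /health route is the same one the built-in health check uses:

```shell
# Tail recent logs for startup errors (missing API keys are the usual culprit)
docker logs --tail 50 avocado-orchestrator

# Probe the health endpoint from the host
curl -s http://localhost:4200/health
```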

Required environment variables

At minimum you need one AI provider key:
  • ANTHROPIC_API_KEY — Claude API key (recommended — most battle-tested)
  • OPENAI_API_KEY — OpenAI API key
  • GOOGLE_GENAI_API_KEY — Google Gemini API key

CORS configuration

By default the container accepts requests from http://localhost:3000 and http://localhost:4100. For production, set the allowed origins explicitly.
ORCHESTRATOR_CORS_ORIGINS=https://your-site.example.com,https://editor.example.com

State persistence

The orchestrator writes session state, telemetry, and generated images to /app/.data inside the container. Mount a volume there to persist data across restarts. The image pre-configures these paths:
  • ORCHESTRATOR_STATE_FILE=/app/.data/orchestrator-state.json
  • CHAT_TELEMETRY_FILE=/app/.data/chat-telemetry.ndjson
  • ORCHESTRATOR_GENERATED_IMAGE_DIR=/app/.data/generated-images
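To verify that snapshots are actually landing on the volume, you can list its contents from a throwaway container — this assumes the named volume avocado-data from the standalone run command above:

```shell
# Mount the volume read-only into a minimal container and list it.
# Expect to see orchestrator-state.json, chat-telemetry.ndjson,
# and a generated-images/ directory once the orchestrator has run.
docker run --rm -v avocado-data:/data:ro alpine ls -lah /data
```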

Using docker-compose

A docker-compose.yml at the repo root runs the orchestrator with sensible defaults:
docker compose up -d
docker compose logs -f orchestrator
docker compose down
The compose file uses a named volume (orchestrator-data) and loads env vars from .env at the repo root.
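The actual docker-compose.yml at the repo root is authoritative, but under the defaults described on this page a minimal equivalent would look roughly like this sketch:

```yaml
# Sketch only — field values inferred from the defaults on this page
services:
  orchestrator:
    image: avocado-orchestrator:latest
    ports:
      - "4200:4200"
    env_file:
      - .env
    volumes:
      - orchestrator-data:/app/.data
    restart: unless-stopped

volumes:
  orchestrator-data:
```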

Health check

The container includes a health check that polls http://127.0.0.1:4200/health every 30 seconds. Check status with:
docker inspect --format='{{.State.Health.Status}}' avocado-orchestrator
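In deploy scripts it is handy to block until the health check reports healthy. A small sketch using the same docker inspect call:

```shell
# Poll the health status every 2 seconds, giving up after ~60 seconds
for i in $(seq 1 30); do
  status=$(docker inspect --format='{{.State.Health.Status}}' avocado-orchestrator)
  [ "$status" = "healthy" ] && break
  sleep 2
done
echo "health status: $status"
```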

Environment reference

See .env.example at the repo root for the complete list of environment variables. Common Docker overrides:
  • PORT — HTTP port (default: 4200)
  • NODE_ENV — production by default in the image
  • ORCHESTRATOR_CORS_ORIGINS — Comma-separated list of allowed origins
  • ORCHESTRATOR_STATE_FILE — Path to session state JSON
  • CHAT_TELEMETRY_FILE — Path to telemetry NDJSON
  • ORCHESTRATOR_GENERATED_IMAGE_DIR — Directory for generated images
  • PUBLISH_TOKEN — Required token for /publish/* endpoints
  • ACCESS_PASSWORD_HASH — Optional password hash for /auth/verify
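Putting the common variables together, a production .env might look like the sketch below. Every value is a placeholder — see .env.example for the authoritative list:

```shell
# Placeholder values — substitute your own
ANTHROPIC_API_KEY=sk-ant-xxxxxxxx
ORCHESTRATOR_CORS_ORIGINS=https://www.example.com,https://editor.example.com
PUBLISH_TOKEN=long-random-string

# The state paths below match the image defaults and rarely need overriding
ORCHESTRATOR_STATE_FILE=/app/.data/orchestrator-state.json
```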

Running locally without Docker (contributors only)

This section requires access to the orchestrator source, which is not currently public — see the closed-source note at the top of the page. If you don’t have repo access, the Docker image is your supported path. If you do have access (early contributor, design partner, etc.), you can run the orchestrator directly from source for a faster dev loop.
For local development with source access, run the orchestrator directly via pnpm from the repository root:
pnpm install
pnpm dev:start        # starts site + editor + orchestrator via tsx
# or run just the orchestrator:
pnpm --filter @ai-site-editor/orchestrator dev
The Dockerfile is an additional distribution option, not a replacement for the source-based dev workflow.

Troubleshooting

Container exits immediately

Check logs: docker logs avocado-orchestrator. The most common cause is missing API keys or an invalid .env file.

CORS errors from editor or site

Set ORCHESTRATOR_CORS_ORIGINS to include both the site and editor origins (no trailing slashes).

State not persisting

Ensure the volume is mounted at /app/.data and that the container user has write permissions.
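To confirm the volume is mounted where the orchestrator expects it, assuming the container name used elsewhere on this page:

```shell
# Should show a mount whose Destination is "/app/.data"
docker inspect --format '{{json .Mounts}}' avocado-orchestrator

# Check that the in-container user can actually write there
docker exec avocado-orchestrator sh -c \
  'touch /app/.data/.write-test && rm /app/.data/.write-test && echo writable'
```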

Health check failing

Wait for the 10-second start period. If it still fails, check that the orchestrator is listening on 0.0.0.0:4200 (it should be by default) and that no firewall is blocking the port.
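If the check keeps failing after the start period, probing from both outside and inside the container narrows down whether it is a listening problem or a networking one. This assumes wget is available in the image, which may not hold on slim base images:

```shell
# From the host — works only if the port is published
curl -s http://localhost:4200/health

# From inside the container — bypasses port publishing entirely
docker exec avocado-orchestrator wget -qO- http://127.0.0.1:4200/health
```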