Documentation Index
Fetch the complete documentation index at: https://docs.acornops.dev/llms.txt
Use this file to discover all available pages before exploring further.
Configuration is split between public host settings, Kubernetes or Compose deployment values, and secret values. Keep secrets out of source control and inject them through the platform secret mechanism for your deployment target.
Public hosts
| Setting | Default production value | Used by |
|---|---|---|
| Platform public URL | https://acornops.dev | Primary API route examples and agent install commands |
| Management console URL | https://console.acornops.dev | Browser app origin, same-origin /api calls, and default OIDC callback derivation |
| Public docs URL | https://docs.acornops.dev | Documentation links |
| Agent WebSocket URL | wss://acornops.dev/api/v1/agent/connect | k8s agent connections |
The platform route and the management console route are separate, but both default deployment paths proxy /api to the control plane. The management console uses its own origin for browser session flows so cookies stay same-origin.
Required secret keys
The Kubernetes chart defaults to an existing Secret named acornops-platform-secrets. These keys are required for the central platform:
| Key | Purpose |
|---|---|
| CONTROL_PLANE_DATABASE_URL | Control-plane Postgres connection |
| CONTROL_PLANE_REDIS_URL | Control-plane Redis connection |
| OIDC_CLIENT_SECRET | Browser sign-in client secret |
| ORCH_SERVICE_TOKEN | Execution-engine and builtin MCP calls into the control plane |
| WEBHOOK_SECRET_ENCRYPTION_KEY | Encryption for webhook signing secrets |
| EXECUTION_ENGINE_REDIS_URL | Execution-engine Redis connection |
| EXECUTION_ENGINE_DISPATCH_TOKEN | Control-plane dispatch auth into the execution engine |
| LLM_GATEWAY_DATABASE_URL | LLM-gateway Postgres connection |
| LLM_GATEWAY_REDIS_URL | LLM-gateway Redis connection |
| LLM_GATEWAY_ADMIN_TOKEN | Control-plane admin auth into the LLM gateway |
| SECRETS_KEK_BASE64 | Encryption key for secrets stored in the LLM-gateway database |
Optional provider and secret-backend keys include:
| Key | Purpose |
|---|---|
| OPENAI_API_KEY | OpenAI provider access |
| ANTHROPIC_API_KEY | Anthropic provider access |
| GEMINI_API_KEY | Gemini provider access |
| VAULT_TOKEN | Vault secret-backend access when Vault is enabled |
Generate unique values for every internal token and encryption key per environment.
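Internal tokens and encryption keys can be generated with Python's secrets module. This is a minimal sketch; the helper names are illustrative and not part of AcornOps, and the 32-byte sizes are a reasonable assumption rather than a documented requirement.

```python
import base64
import secrets

def generate_service_token() -> str:
    """Random URL-safe token, e.g. for ORCH_SERVICE_TOKEN or
    EXECUTION_ENGINE_DISPATCH_TOKEN."""
    return secrets.token_urlsafe(32)

def generate_kek_base64() -> str:
    """32 random bytes, base64-encoded, e.g. for SECRETS_KEK_BASE64."""
    return base64.b64encode(secrets.token_bytes(32)).decode("ascii")

if __name__ == "__main__":
    print(generate_service_token())
    print(generate_kek_base64())
```

Run the generator once per environment so staging and production never share a token or key.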
OIDC
The control plane owns OIDC login and callback handling:
- Login entrypoint: GET /api/v1/auth/oidc/login?return_to=<management-console-url>
- Callback entrypoint: GET /api/v1/auth/oidc/callback
For the default Kubernetes and VM Compose settings, register this redirect URI with your provider:
https://console.acornops.dev/api/v1/auth/oidc/callback
That URL is still served by the control plane through the console host’s /api proxy. If you override controlPlane.oidc.redirectUri or OIDC_REDIRECT_URI, register the exact override value instead. Registering only https://acornops.dev/api/v1/auth/oidc/callback will fail unless your deployment is configured to use that URL as the OIDC redirect URI.
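Clients redirecting a browser into the login entrypoint must URL-encode the return_to value. A minimal sketch, assuming the default console origin proxies /api to the control plane:

```python
from urllib.parse import urlencode

# Console origin; /api on this host is proxied to the control plane.
CONSOLE_URL = "https://console.acornops.dev"

def oidc_login_url(return_to: str) -> str:
    """Build the browser redirect into the OIDC login entrypoint,
    URL-encoding the return_to query parameter."""
    query = urlencode({"return_to": return_to})
    return f"{CONSOLE_URL}/api/v1/auth/oidc/login?{query}"

print(oidc_login_url("https://console.acornops.dev"))
```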
Common OIDC settings:
| Setting | Notes |
|---|---|
| Issuer URL | Provider issuer used for discovery and token validation |
| Public issuer URL | Optional override when internal and public issuer URLs differ |
| Client ID | OIDC client configured for AcornOps |
| Client secret | Stored in the platform secret bundle |
| Scopes | Defaults to openid profile email |
| Token endpoint auth method | Defaults to client-secret based auth |
Password and device auth
Password auth is disabled by default for production-style deployments. Device auth can be enabled for CLI or device-style flows with an allow-list of client IDs.
Development deployments may expose a dev-login endpoint. Do not enable dev-login in production.
LLM providers and run limits
The control plane sets default model policy and runtime budgets for runs:
| Setting area | Examples |
|---|---|
| Providers | openai, anthropic, gemini |
| Models | Provider-specific allowed model list |
| Runtime limits | max runtime, max steps, max tool calls, duplicate tool-call limit |
| Output limits | max context tokens, max output tokens, budget cents |
| Sampling | default temperature |
The LLM gateway enforces the run-scoped JWT minted by the control plane. It should not infer provider, model, or tool permissions from request body fields alone.
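The enforcement pattern can be sketched as: verify the token signature first, then check request-body fields against the token's claims rather than trusting the body. This assumes an HS256-signed JWT with a shared key; the actual signing algorithm and claim names (provider, model) are illustrative, not documented AcornOps internals.

```python
import base64
import hashlib
import hmac
import json

def _b64url_decode(part: str) -> bytes:
    # JWT segments drop base64 padding; restore it before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def verify_run_jwt(token: str, key: bytes) -> dict:
    """Verify an HS256 JWT signature and return its claims.
    Raises ValueError if the signature does not match."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(key, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        raise ValueError("bad run-token signature")
    return json.loads(_b64url_decode(payload_b64))

def enforce(claims: dict, body: dict) -> None:
    """Reject requests whose body disagrees with the run token."""
    for field in ("provider", "model"):
        if body.get(field) != claims.get(field):
            raise PermissionError(f"{field} not permitted by run token")
```

The key point is the direction of trust: the body is only compared against the already-verified claims, never used on its own.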
MCP egress policy
Remote MCP servers are configured per workspace and cluster. In production, the gateway should require HTTPS and block private, local, and reserved network targets unless you intentionally allow specific hosts.
Use allow-lists for trusted internal MCP endpoints instead of broad private-network access.
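One way to express this policy is to require HTTPS, short-circuit on an explicit allow-list, and reject any target that resolves to a non-public address. The allow-list hostname below is hypothetical, and a real gateway would also need to pin the resolved address it connects to (resolving twice is vulnerable to DNS rebinding).

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Hypothetical allow-list of trusted internal MCP endpoints.
ALLOWED_INTERNAL_HOSTS = {"mcp.internal.example.com"}

def check_mcp_egress(url: str) -> None:
    """Raise ValueError for non-HTTPS URLs or targets that resolve to
    private, loopback, link-local, or otherwise non-public addresses."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError("MCP egress requires HTTPS")
    host = parsed.hostname or ""
    if host in ALLOWED_INTERNAL_HOSTS:
        return
    for info in socket.getaddrinfo(host, None):
        addr = ipaddress.ip_address(info[4][0])
        if not addr.is_global:
            raise ValueError(f"blocked non-public MCP target: {addr}")
```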
Webhooks
Webhook signing secrets are generated per subscription and returned only once at creation time. The control plane stores encrypted webhook secrets and signs deliveries with HMAC-SHA256.
Webhook delivery is best-effort. Consumers should handle duplicate events and should verify signatures before processing payloads.
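Signature verification can be sketched as recomputing the HMAC-SHA256 over the raw request body and comparing in constant time. The hex encoding and parameter shapes here are assumptions; check your subscription's delivery details for the exact header name and encoding.

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, payload: bytes, signature_hex: str) -> bool:
    """Recompute HMAC-SHA256 over the raw payload bytes and compare the
    hex digest against the received signature in constant time."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

Always verify against the raw bytes as received, before any JSON parsing or re-serialization, and use a constant-time comparison rather than ==.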