Documentation Index

Fetch the complete documentation index at: https://docs.acornops.dev/llms.txt

Use this file to discover all available pages before exploring further.

AcornOps connects a central operations platform to Kubernetes workload clusters through outbound agents. Operators use the management console to investigate cluster state, run guided triage sessions, and coordinate safe remediation workflows.

What AcornOps gives you

  • A workspace model for grouping clusters, members, MCP servers, tool settings, and investigation history.
  • A browser management console for cluster inventory, findings, runbooks, members, settings, and chat-based troubleshooting.
  • A control plane that owns auth, sessions, workspaces, cluster registration, agent connections, run state, and API authorization.
  • An execution engine that performs troubleshooting runs and streams progress back to the control plane.
  • An LLM gateway that enforces run-scoped model and tool permissions before provider or MCP traffic leaves the platform.
  • A k8s agent that runs inside each workload cluster and connects outbound to the control plane.
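
The run-scoped permission check mentioned for the LLM gateway can be sketched as a simple allowlist gate applied before any provider or MCP request leaves the platform. This is a minimal illustration, not the actual AcornOps implementation; the `RunScope` type, field names, and model/tool identifiers are all assumptions.

```python
from dataclasses import dataclass
from typing import FrozenSet, Optional

# Hypothetical sketch of a run-scoped permission gate. In AcornOps terms,
# the gateway would consult something like this before forwarding traffic
# to an LLM provider or MCP server. All names here are illustrative.

@dataclass(frozen=True)
class RunScope:
    run_id: str
    allowed_models: FrozenSet[str]
    allowed_tools: FrozenSet[str]

def authorize(scope: RunScope, model: str, tool: Optional[str] = None) -> bool:
    """Permit the request only if the model (and, when present, the tool)
    is in this run's allowlist."""
    if model not in scope.allowed_models:
        return False
    if tool is not None and tool not in scope.allowed_tools:
        return False
    return True

scope = RunScope("run-123", frozenset({"gpt-4o"}), frozenset({"kubectl_logs"}))
print(authorize(scope, "gpt-4o"))                  # True: model allowed
print(authorize(scope, "gpt-4o", "kubectl_logs"))  # True: model + permitted tool
print(authorize(scope, "gpt-4o", "shell_exec"))    # False: tool not in run scope
```

The point of evaluating this at the gateway rather than in the console is that every egress path shares one enforcement point.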

Start with deployment

Prepare the platform, connect workload clusters, and expose the management console.

Review the architecture

Understand the control plane, execution engine, LLM gateway, and k8s agent boundaries.

Core workflow

  1. Deploy the central platform with either the Kubernetes Helm chart or the single-VM Docker Compose stack.
  2. Sign in to the management console and create a workspace.
  3. Register a workload cluster from that workspace.
  4. Run the generated install command to deploy the k8s agent into the workload cluster.
  5. Review cluster findings, manage available tools, and start troubleshooting sessions.
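
Step 3 amounts to a call against the browser/control-plane API surface listed later on this page. The sketch below builds (without sending) such a request; the `/workspaces/{id}/clusters` path, request body fields, and bearer-token auth are assumptions, not the documented API.

```python
import json
from urllib import request

# Public API base from the "Public surfaces" section of this page.
API_BASE = "https://acornops.dev/api/v1"

def build_register_request(workspace_id: str, cluster_name: str, token: str) -> request.Request:
    """Build (but do not send) a cluster-registration request.
    The endpoint path and payload shape are hypothetical."""
    url = f"{API_BASE}/workspaces/{workspace_id}/clusters"
    body = json.dumps({"name": cluster_name}).encode()
    req = request.Request(url, data=body, method="POST")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Content-Type", "application/json")
    return req

req = build_register_request("ws-1", "prod-east", "SESSION_TOKEN")
print(req.full_url)  # https://acornops.dev/api/v1/workspaces/ws-1/clusters
```

In practice the console performs this step for you and returns the agent install command used in step 4.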

Platform Components

Management console

Browser application for workspaces, clusters, investigations, sessions, and tools.

Control plane

API, auth, workspace state, agent WebSocket connections, and orchestration entrypoint.

Execution engine

Run execution, tool-call coordination, and Redis-backed run reservation.
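
Redis-backed run reservation typically reduces to an atomic set-if-absent with a TTL, so that exactly one execution-engine replica claims each run. The sketch below mimics that semantics with an in-memory dict; key shapes, TTL value, and class names are assumptions, not AcornOps internals.

```python
import time

class RunReservations:
    """In-memory stand-in for Redis `SET key value NX EX ttl`.
    A real deployment would issue that single Redis command so the
    claim is atomic across execution-engine replicas."""

    def __init__(self) -> None:
        self._store = {}  # run_id -> (owner, expires_at)

    def reserve(self, run_id: str, owner: str, ttl: float = 30.0) -> bool:
        now = time.monotonic()
        current = self._store.get(run_id)
        if current is not None and current[1] > now:
            return False  # another engine already holds an unexpired claim
        self._store[run_id] = (owner, now + ttl)
        return True

res = RunReservations()
print(res.reserve("run-42", "engine-a"))  # True: first claim wins
print(res.reserve("run-42", "engine-b"))  # False: run already reserved
```

The TTL matters: if an engine crashes mid-run, the reservation expires and another replica can pick the run up.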

LLM gateway

Provider routing, model access, secrets handling, and LLM request auditing.
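
Provider routing in a gateway like this often reduces to a lookup from model name to upstream provider, with secrets attached and the request audited on the way through. A toy sketch; the prefixes, provider names, and routing-by-prefix scheme are illustrative assumptions.

```python
# Toy routing table: model-name prefix -> provider. A real gateway would
# also attach per-provider credentials and write an audit record per request.
ROUTES = {
    "gpt-": "openai",
    "claude-": "anthropic",
}

def route(model: str) -> str:
    """Pick an upstream provider for a model, or fail loudly."""
    for prefix, provider in ROUTES.items():
        if model.startswith(prefix):
            return provider
    raise LookupError(f"no provider route for model {model!r}")

print(route("gpt-4o"))  # openai
```

Failing closed on unknown models keeps routing decisions consistent with the run-scoped permission checks described above.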

k8s agent

Outbound workload-cluster connector for snapshots, logs, and tool execution.

Public surfaces

  • Console: https://console.acornops.dev/
  • Public docs: https://docs.acornops.dev/
  • Browser/control-plane API: https://acornops.dev/api/v1
  • Agent WebSocket: wss://acornops.dev/api/v1/agent/connect

The execution engine and LLM gateway are internal services. They should not have public DNS records or public ingress routes in production.
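
Because the k8s agent dials out to the `wss://` endpoint above, workload clusters need no inbound ports. A small sketch of building that outbound connect URL; the `cluster` and `token` query-parameter names are assumptions (a real agent may well pass credentials in headers instead).

```python
from urllib.parse import urlencode, urlsplit

# Agent WebSocket endpoint from the "Public surfaces" list above.
AGENT_WS = "wss://acornops.dev/api/v1/agent/connect"

def agent_connect_url(cluster_id: str, agent_token: str) -> str:
    """Build the outbound connect URL. Parameter names are illustrative."""
    query = urlencode({"cluster": cluster_id, "token": agent_token})
    return f"{AGENT_WS}?{query}"

url = agent_connect_url("cl-7", "AGENT_TOKEN")
print(urlsplit(url).scheme)  # wss: outbound-only, no inbound ingress required
```

Keeping the connection outbound is what lets the execution engine and LLM gateway stay off the public network entirely.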

Where to go next

Kubernetes

Helm deployment for the central platform, with operator-provided Postgres and Redis.

VM Compose

Docker Compose deployment path for single-VM environments.

Configuration

Required secrets, OIDC settings, LLM provider settings, and runtime limits.

API and auth

Browser session flow, workspace APIs, runs, webhooks, and role capabilities.