Observer

AI-powered log security for Linux servers — detects attacks, captures evidence, verifies outcomes

Observer watches your Docker container logs and host system events — sshd, sudo, kernel — classifies threats using an LLM, captures HTTP response evidence, and verifies whether attacks actually succeeded before sending an alert. One accurate finding instead of fifty false alarms.

It ships as a single static Go binary with no runtime dependencies. Docker containers are monitored automatically if Docker is present. If not, Observer watches everything through journald — the policy engine, LLM classification, and email alerts all work on a bare metal server with nothing but sshd.
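A single-binary deployment like this is typically wired up as a systemd service. The unit below is a hypothetical sketch only; the binary path, restart policy, and privileges are assumptions for illustration, not Observer's documented install layout:

```ini
[Unit]
Description=Observer log security agent (illustrative example unit)
After=network-online.target
Wants=network-online.target

[Service]
# Path is an assumed install location, not a documented default.
ExecStart=/usr/local/bin/observer
Restart=on-failure
# Reading journald and the Docker socket generally requires elevated access.
User=root

[Install]
WantedBy=multi-user.target
```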

Current status

Version: v1.0 (April 2026)
License: AGPL-3.0 (open core)
Runtime: Single Go binary, systemd service
LLM: Any OpenAI-compatible endpoint (OpenAI, Ollama, self-hosted)
Dashboard: Free hosted dashboard at vaultguardian.io
Footprint: Bare metal Linux, Docker, Docker Swarm

Core principles

Observer is built around a few hard-edged design decisions:

  • Deterministic first, AI second. Known facts resolve via rules. The LLM only handles what rules can't. Policy is identity, not inference.
  • Evidence before escalation. No email fires until Observer knows what the server actually returned. Attacks that failed stay quiet.
  • One pattern, one payment. Every LLM classification becomes a reusable pattern. Cache hits are free. 97%+ hit rate in production.
  • Observer observes. By design, it doesn't block, quarantine, or respond. Response actions (fail2ban, iptables) will be opt-in only.
  • Works everywhere. The same binary runs on bare metal, Docker hosts, and Docker Swarm clusters. No sidecars, no agents, no daemons on your containers.
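The rules-first, cache-second, LLM-last flow behind the first three principles can be sketched in a few lines of Go. Everything here is illustrative and assumed, not Observer's actual API: the type names, the sample rule, and the `Verdict` values are invented for the example.

```go
package main

import (
	"fmt"
	"strings"
)

// Verdict is the outcome of classifying one log line.
type Verdict string

const (
	Benign Verdict = "benign"
	Threat Verdict = "threat"
)

// Classifier resolves known facts with deterministic rules first,
// reuses cached verdicts second, and only falls back to an LLM
// for lines neither layer can decide.
type Classifier struct {
	cache    map[string]Verdict    // past LLM verdicts, reused for free
	askLLM   func(string) Verdict  // the only path that costs money
	llmCalls int                   // how many times we actually paid
}

func NewClassifier(askLLM func(string) Verdict) *Classifier {
	return &Classifier{cache: map[string]Verdict{}, askLLM: askLLM}
}

// Classify applies the three layers in order.
func (c *Classifier) Classify(line string) Verdict {
	// 1. Deterministic rule: policy is identity, not inference.
	//    (A single hard-coded rule stands in for a real rule set.)
	if strings.Contains(line, "session opened for user deploy") {
		return Benign
	}
	// 2. Pattern cache: a hit never touches the LLM.
	if v, ok := c.cache[line]; ok {
		return v
	}
	// 3. LLM fallback: classify once, cache forever.
	v := c.askLLM(line)
	c.llmCalls++
	c.cache[line] = v
	return v
}

func main() {
	c := NewClassifier(func(string) Verdict { return Threat })
	fmt.Println(c.Classify("GET /wp-login.php?action=register")) // LLM call
	fmt.Println(c.Classify("GET /wp-login.php?action=register")) // cache hit
	fmt.Println("LLM calls:", c.llmCalls)
}
```

The design point is that the cache key is the classified pattern itself, so repeated attack traffic converges on zero marginal LLM cost.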
