A product by Selaware

Oculis

AI Agent Observability.

Every agent. Every run. Every dollar. In one view.

Oculis is the control plane for AI operations. Install one collector on your infrastructure and get instant visibility into every AI agent across every machine — with costs, health, relationships, and alerts out of the box.

  • 5 min from install to first insight
  • 50MB RAM per collector
  • Zero code changes required
  • 6+ frameworks detected automatically
The Operating Loop

Discover. Analyze. Optimize. Act.

Four phases. Running continuously. Across every agent in your organization.

01

Discover

The collector finds every AI workload — named or shadow, prod or dev, your SDK or a custom one.

02

Analyze

Live costs, success rates, errors, and topology — priced server-side with real provider rates.

03

Optimize

Savings recommendations with projected dollar impact: model swaps, duplicates, error waste.

04

Act

Alert the right person. Enforce policies. Escalate incidents — before customers notice.

Discovery

Find every agent. Including the ones you forgot about.

Shadow AI is real. Teams spin up agents on laptops, in CI, in prod. One collector — running as a container, binary, or systemd service — surveys every process and detects AI workloads automatically.

  • Framework detection for LangChain, CrewAI, AutoGen, LlamaIndex, Haystack, OpenClaw
  • Hardware + host context — CPU, RAM, GPU usage correlated with agent activity
  • Topology graph — who calls whom, which models, which cache layers
  • Credential audit — know which tokens each agent uses, catch leaks early

A live topology. Teal nodes = active agents. Dashed amber = newly discovered (not yet classified).

Cost overview · this month: $4,217 (32% vs last month)

  • research-assistant · $1,451
  • invoice-classifier · $1,204
  • support-triage · $892
  • content-drafter · $421
  • + 43 others · $249
Cost Intelligence

Every dollar. Every run. Accounted for.

We price every run server-side using live provider rates. No SDK trust, no daily CSV imports, no stale pricing tables — just the real cost attributed to the agent, model, user, and team that spent it.

  • 50+ models priced across OpenAI, Anthropic, Google, Mistral, DeepSeek, Meta, Cohere, OpenRouter
  • Cache savings in dollars — first-class metric, not buried in logs
  • Spike detection with automatic root-cause attribution (which prompt, which user, which model)
  • Multi-tier allocation — by agent, team, department, product, customer (if you tag)
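Multi-tier allocation depends on tagging. The exact tagging mechanism isn't specified here; as an illustrative sketch, assume the collector reads a hypothetical `OCULIS_TAGS` environment variable from the agent's process (the variable name and key set are assumptions, not documented specifics):

```shell
# Hypothetical: tag an agent process so its spend rolls up by team,
# product, and customer. OCULIS_TAGS is an assumed variable name --
# check your dashboard's docs for the real mechanism.
OCULIS_TAGS="team=billing,product=invoices,customer=acme" \
  python invoice_classifier.py
```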
Run Analytics

Every run. Searchable. Explainable.

Every LLM call, every tool call, every retry — captured as a run with full context. Trace the $3,000 prompt back to the user who wrote it. Find the 5% of runs that are eating 50% of the budget. Build evidence, not hypotheses.

  • Full-text search across prompts, responses, errors, and metadata
  • Trace-based grouping — see the full chain of calls that made up one user action
  • Error burst detection — flags 5x+ spikes in failure rate automatically
  • Percentile latency (p50, p95, p99) per agent, per model, per prompt template
247 results (time · agent · model · status · cost)

10:42  invoice-classifier  claude-3-5-sonnet  timeout     $0.42
10:41  research-assistant  gpt-4-turbo        200         $1.12
10:40  invoice-classifier  claude-3-5-sonnet  rate_limit  $0.00
10:39  support-triage      gpt-4o-mini        200         $0.03
10:39  content-drafter     claude-3-haiku     retry       $0.08
+ 242 more
Install

One command. Five minutes.

The collector runs as a Docker container, a binary, or a systemd service. 50MB RAM, no external dependencies, no code changes to your agents.

oculis-collector · installation
# 1. Grab a deploy token from dashboard.oculis.selaware.ai
# 2. Run one line on each host (or in your Kubernetes cluster)

$ docker run -d \
    --name oculis-collector \
    --restart unless-stopped \
    -e OCULIS_TOKEN=oc_dt_abc123... \
    oculis/collector:latest

[+] Connecting to api.oculis.selaware.ai ... OK
[+] Scanning local processes ... found 12 AI workloads
[+] Detected: langchain (3), crewai (2), openclaw (4), custom (3)
[+] Streaming run telemetry ...

✓ Collector online. See your agents at dashboard.oculis.selaware.ai

Kubernetes

Helm chart with auto-discovery of pods across namespaces.
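A minimal Helm install might look like the following. The chart repo URL, chart name, and value key are assumptions for illustration, not published specifics; the deploy token placeholder mirrors the Docker example above:

```shell
# Assumed chart location and value names -- see the dashboard's
# install page for the real ones.
helm repo add oculis https://charts.oculis.selaware.ai
helm repo update
helm install oculis-collector oculis/collector \
  --namespace oculis --create-namespace \
  --set token=oc_dt_abc123...
```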

Systemd

Single binary + unit file. Runs natively on any Linux host.
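For the systemd path, a sketch of the unit file and enablement steps, assuming the binary is named `oculis-collector` and reads `OCULIS_TOKEN` from its environment (both are assumptions based on the Docker example above):

```shell
# Install the binary, write a unit file, and start the service.
sudo install -m 0755 oculis-collector /usr/local/bin/

sudo tee /etc/systemd/system/oculis-collector.service > /dev/null <<'EOF'
[Unit]
Description=Oculis collector
After=network-online.target

[Service]
Environment=OCULIS_TOKEN=oc_dt_abc123...
ExecStart=/usr/local/bin/oculis-collector
Restart=always

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now oculis-collector
```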

Self-hosted

Enterprise: run the full stack in your VPC. No data leaves.

Ready?

See Oculis in your environment.

Book a 30-minute demo. We'll install the collector on a test host and walk through your real agents together.