Cortex (AI Chat)¶
Cortex is an AI-powered chat interface that lets you interact with your machines through natural language. Ask questions, run diagnostics, manage processes, and execute commands — all through conversation.
Overview¶
Cortex connects an LLM (Claude or OpenAI) to your machines via 24 specialized tools organized into three tiers:
| Tier | Type | Approval | Tools |
|---|---|---|---|
| Tier 1 | Read-only | Auto-approved | 10 tools — system info, process lists, logs, metrics |
| Tier 2 | Process management | Auto-approved | 5 tools — restart, kill, start, set launch mode, screenshot |
| Tier 3 | Privileged | Requires confirmation | 9 tools — run commands/scripts, read/write files, reboot/shutdown |
Setup¶
Configure an LLM Provider¶
Cortex needs an API key from either Anthropic (Claude) or OpenAI.
User-Level Key¶
- In the dashboard, open Cortex
- Click the settings gear icon
- Select Provider: Anthropic or OpenAI
- Enter your API key
- Optionally select a model (defaults to the latest)
- Click Save
Your key is encrypted and stored in Firestore — it's never exposed to the client.
Site-Level Key (Admin)¶
Admins can set a shared API key for the entire site:
- In the dashboard, open Cortex
- Click the settings gear icon
- Switch to the "Site Key" tab
- Configure provider and key
- All users on this site can use Cortex without their own key
Priority: User key takes precedence over site key.
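This precedence rule can be sketched as a small resolution step. The shapes below are illustrative only (the real config is stored encrypted in Firestore, and these type/field names are assumptions, not the actual implementation):

```typescript
// Hypothetical config shape -- illustrates only the "user key wins" rule.
interface LlmConfig {
  provider: "anthropic" | "openai";
  apiKey: string;
  model?: string;
}

// Resolve which key Cortex uses: the user's own key if present,
// otherwise the site-wide key set by an admin, otherwise none.
function resolveLlmConfig(
  userConfig: LlmConfig | null,
  siteConfig: LlmConfig | null,
): { config: LlmConfig; source: "user" | "site" } | null {
  if (userConfig) return { config: userConfig, source: "user" };
  if (siteConfig) return { config: siteConfig, source: "site" };
  return null; // Cortex is unavailable until some key is configured
}
```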
Using Cortex¶
- Open Cortex from the dashboard
- Select a machine to talk to
- Type a message in natural language
Example Conversations¶
Check system health:
"How's the system doing? Any issues?"
Cortex calls `get_system_info` and `get_agent_health`, then summarizes CPU, memory, disk usage, and connection status.
Manage processes:
"Restart TouchDesigner"
Cortex calls `restart_process` with `process_name: "TouchDesigner"` and reports the result.
Diagnose issues:
"Why is the machine running slow?"
Cortex calls `get_system_info`, `get_running_processes`, and `get_event_logs` to identify resource-heavy processes or recent errors.
Run a command (Tier 3):
"Check the network configuration"
Cortex requests confirmation to run `ipconfig /all`, then returns the output.
Tool Tiers¶
Tier 1: Read-Only (Auto-Approved)¶
These tools only read information and never modify anything:
| Tool | Description |
|---|---|
| `get_system_info` | CPU, memory, disk, GPU, hostname, OS, uptime, agent version |
| `get_process_list` | All Owlette-configured processes with status |
| `get_running_processes` | All OS processes with CPU/memory usage (filterable) |
| `get_network_info` | Network interfaces, IP addresses, link status |
| `get_disk_usage` | All drives with total/used/free space |
| `get_event_logs` | Windows event logs (Application, System, Security) |
| `get_service_status` | Status of any Windows service |
| `get_agent_config` | Owlette agent configuration (tokens stripped) |
| `get_agent_logs` | Recent agent log entries (filterable by level) |
| `get_agent_health` | Connection state, health probe results |
Tier 2: Process Management (Auto-Approved)¶
These wrap existing Owlette commands:
| Tool | Description |
|---|---|
| `restart_process` | Restart an Owlette-configured process |
| `kill_process` | Kill/stop a process |
| `start_process` | Start a stopped process |
| `set_launch_mode` | Set launch mode (`off`, `always`, `scheduled`) |
| `capture_screenshot` | Capture a screenshot of the machine's desktop |
Tier 3: Privileged (Requires Confirmation)¶
These tools require you to click Confirm before execution:
| Tool | Description |
|---|---|
| `run_command` | Execute a shell command (allowlist enforced) |
| `run_powershell` | Execute a PowerShell command (allowlist enforced) |
| `run_python` | Execute a Python script on the machine |
| `read_file` | Read a file on the machine (max 100KB) |
| `write_file` | Write content to a file |
| `list_directory` | List directory contents with file sizes and dates |
| `reboot_machine` | Reboot the Windows machine |
| `shutdown_machine` | Shut down the Windows machine |
| `cancel_reboot` | Cancel a scheduled reboot or shutdown |
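The tier rules above boil down to one decision: auto-execute or ask first. A minimal sketch of how a tool registry could encode that (the `ToolDef` shape and `needsConfirmation` helper are illustrative assumptions, not the actual implementation):

```typescript
type Tier = 1 | 2 | 3;

// Illustrative registry entry: maps a tool name to its tier so the
// server knows whether to auto-execute or request confirmation.
interface ToolDef {
  name: string;
  description: string;
  tier: Tier;
}

// A few entries from the tables above, one per tier.
const tools: ToolDef[] = [
  { name: "get_system_info", description: "CPU, memory, disk, GPU, OS, uptime", tier: 1 },
  { name: "restart_process", description: "Restart an Owlette-configured process", tier: 2 },
  { name: "run_command", description: "Execute an allowlisted shell command", tier: 3 },
];

// Tier 1 and Tier 2 auto-execute; Tier 3 needs an explicit Confirm click.
function needsConfirmation(toolName: string): boolean {
  const def = tools.find((t) => t.name === toolName);
  if (!def) throw new Error(`unknown tool: ${toolName}`);
  return def.tier === 3;
}
```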
Full tool reference
See Cortex Tools Reference for complete parameter documentation and allowed command lists.
How It Works¶
```
User types message
        │
        ▼
POST /api/cortex (streaming)
        │
        ├── Resolve LLM config (user key → site key fallback)
        ├── Send messages + tool definitions to LLM
        │
        ▼
LLM decides to call a tool
        │
        ├── Tier 1/2: Auto-execute
        │       │
        │       ├── Write command to Firestore pending queue
        │       ├── Poll for completion (1.5s intervals, 30s timeout)
        │       └── Return result to LLM
        │
        └── Tier 3: Request user confirmation
                │
                ├── Dashboard shows confirmation dialog
                ├── User clicks Confirm/Deny
                └── If confirmed: execute and return result
```
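The pending-queue step can be sketched as a simple poll loop. This is a sketch with the I/O injected as callbacks; in the real system the enqueue and result lookup are Firestore writes and reads:

```typescript
// Sketch of the auto-execute path: enqueue a command, then poll for its
// result at a fixed interval until it completes or the timeout elapses.
// `enqueue` and `fetchResult` stand in for the real Firestore operations.
async function executeViaQueue(
  enqueue: () => Promise<string>,                       // returns a command id
  fetchResult: (id: string) => Promise<string | null>,  // null until done
  intervalMs = 1500,
  timeoutMs = 30_000,
): Promise<string> {
  const id = await enqueue();
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const result = await fetchResult(id);
    if (result !== null) return result; // agent finished the command
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error("command timed out");
}
```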
Autonomous Mode¶
Cortex can operate autonomously as a cluster manager — when a process crashes or fails to start, Cortex automatically investigates and attempts remediation without human intervention.
How It Works¶
```
Agent detects process crash
        │
        ▼
POST /api/agent/alert (existing alert system)
        │
        ├── Email notifications (existing)
        ├── Webhook notifications (existing)
        │
        └── Trigger autonomous Cortex (new)
                │
                ▼
        POST /api/cortex/autonomous (internal)
                │
                ├── Check: autonomous enabled for site?
                ├── Check: dedup/cooldown (same crash within 15 min?)
                ├── Check: concurrency (max 3 active sessions per site)
                │
                ▼
        generateText() with tool calling
                │
                ├── Read agent logs → look for errors
                ├── Check process status → confirm crash
                ├── Restart process → verify it's running
                │
                ▼
        Save conversation + update event record
                │
                ├── Resolved → logged for review
                └── Escalated → email admins
```
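The three gate checks in the flow can be expressed as pure logic. A minimal sketch, assuming hypothetical state fields (the real checks run against Firestore records):

```typescript
// Illustrative per-site gate state; field names are assumptions.
interface GateState {
  autonomousEnabled: boolean;
  lastEventAt: Map<string, number>; // "machineId:processName" -> last event, epoch ms
  activeSessions: number;
}

// Decide whether an incoming crash event should start an investigation,
// mirroring the flow: enabled? under the concurrency cap? outside cooldown?
function shouldInvestigate(
  state: GateState,
  machineId: string,
  processName: string,
  now: number,
  cooldownMinutes = 15,
  maxConcurrent = 3,
): boolean {
  if (!state.autonomousEnabled) return false;
  if (state.activeSessions >= maxConcurrent) return false;
  const key = `${machineId}:${processName}`;
  const last = state.lastEventAt.get(key);
  if (last !== undefined && now - last < cooldownMinutes * 60_000) {
    return false; // same machine+process within the cooldown window -> dedup
  }
  return true;
}
```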
The Directive¶
Every autonomous investigation is guided by a directive — a customizable instruction that tells Cortex its mission. The default directive:
> Keep all configured processes running and machines operational. When a process crashes, check agent logs and system event logs for errors, and restart the process. If a restart fails twice, escalate to site admins.
Custom directives can be set per site in Firestore (`sites/{siteId}/settings/cortex` → `directive` field).
Enabling Autonomous Mode¶
- Set the internal secret: add the `CORTEX_INTERNAL_SECRET` environment variable in Railway (see Environment Variables)
- Configure a site-level LLM key: autonomous mode uses the site key, not user keys (Cortex Settings → Site Key tab)
- Enable per site in the Firestore Console:
    - Navigate to `sites/{your-site-id}/settings/cortex`
    - Set `autonomousEnabled` to `true`
Configuration Options¶
| Setting | Default | Description |
|---|---|---|
| `autonomousEnabled` | `false` | Master switch — must be `true` for autonomous mode |
| `directive` | (see above) | Custom mission text for the AI |
| `maxTier` | `2` | Max tool tier (1=read-only, 2=+process management, 3=+shell commands) |
| `autonomousModel` | `null` | Override LLM model (e.g., use a cheaper model for autonomous runs) |
| `cooldownMinutes` | `15` | Wait time before re-investigating the same machine+process |
| `maxEventsPerHour` | `10` | Max incoming events processed per hour per site |
| `escalationEmail` | `true` | Email site admins when Cortex escalates |
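Put together, the document at `sites/{siteId}/settings/cortex` might look like this. This is a sketch built from the fields in the table above, with defaults written out explicitly and a shortened directive for illustration:

```typescript
// Example Firestore document for sites/{siteId}/settings/cortex.
// Every option from the table, at its default except the master switch.
const cortexSettings = {
  autonomousEnabled: true,  // opt in explicitly; defaults to false
  directive:
    "Keep all configured processes running and machines operational. " +
    "If a restart fails twice, escalate to site admins.",
  maxTier: 2,               // process management allowed, no shell commands
  autonomousModel: null,    // use the site's default model
  cooldownMinutes: 15,
  maxEventsPerHour: 10,
  escalationEmail: true,
};
```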
Guardrails¶
Autonomous Cortex has multiple safety layers:
- Opt-in per site — disabled by default, requires explicit admin action
- Event dedup — same machine+process won't be investigated again within the cooldown window
- Concurrency cap — max 3 simultaneous investigations per site
- Step limit — max 15 tool-calling rounds per investigation
- Restart limit — system prompt instructs max 2 restart attempts before escalating
- Tier restriction — default Tier 2 (no shell commands unless admin overrides)
- Offline detection — if the machine is offline, Cortex immediately escalates instead of wasting LLM calls
Reviewing Autonomous Actions¶
Autonomous conversations appear in the Cortex sidebar with a ⚡ auto badge. Click to view exactly what Cortex investigated, which tools it called, and what actions it took.
Event records are stored in Firestore at sites/{siteId}/cortex-events/ with full audit trails including tool calls, timestamps, and outcome summaries.
Escalation¶
When Cortex can't resolve an issue (restart fails, unexpected errors, machine offline), it:
- Marks the event as "escalated"
- Sends an escalation email to site admins with:
- What happened (process name, machine, error)
- What Cortex investigated and attempted
- Why it's escalating
- Logs the full conversation for review in the Cortex UI
Security¶
- Allowlisted commands: `run_command` and `run_powershell` only allow specific commands (e.g., `ipconfig`, `systeminfo`, `Get-Process`)
- File size limits: `read_file` is limited to 100KB
- Machine online check: Cortex verifies the machine is online before executing
- Encrypted API keys: LLM keys are encrypted at rest in Firestore
- No key exposure: API keys never leave the server — streaming happens server-side
- Autonomous auth: internal endpoint authenticated by a shared secret, not exposed to the public
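An allowlist check like the one described for `run_command` could be as simple as matching the command's first token. A sketch only: the allowlist contents and matching rule here are illustrative assumptions, not the actual enforced list (see the Cortex Tools Reference for that):

```typescript
// Illustrative allowlist: only the base command (first token) is matched,
// so "ipconfig /all" passes but "del C:\Windows" does not.
const COMMAND_ALLOWLIST = new Set(["ipconfig", "systeminfo", "ping", "netstat"]);

function isCommandAllowed(command: string): boolean {
  const base = command.trim().split(/\s+/)[0]?.toLowerCase();
  return base !== undefined && base.length > 0 && COMMAND_ALLOWLIST.has(base);
}
```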