# Aegis System Architecture

## Overview
Aegis is an autonomous AI agent platform operating within a contained LXC environment. The system combines multi-tier memory, intelligent routing across LLM providers, and multi-channel communication to execute tasks autonomously.
```mermaid
flowchart TB
    subgraph EXT["External World"]
        Discord[Discord]
        WhatsApp[WhatsApp]
        Voice[Voice]
        Email[Email]
        Telegram[Telegram]
    end
    subgraph LXC["AEGIS LXC (10.10.10.103)"]
        Traefik["Traefik Proxy<br/>aegis.rbnk.uk"]
        subgraph Docker["Docker Compose"]
            Dashboard["Dashboard Service<br/>FastAPI + Uvicorn<br/>60+ Endpoints"]
            Scheduler["Scheduler Service<br/>APScheduler<br/>18+ Jobs"]
            FalkorDB["FalkorDB<br/>Knowledge Graph"]
        end
        PostgreSQL["PostgreSQL<br/>host.docker.internal<br/>• episodic_memory<br/>• semantic_memory<br/>• workflow_checkpoints<br/>• execution_traces"]
    end
    EXT --> Traefik
    Traefik --> Dashboard
    Dashboard <--> PostgreSQL
    Scheduler <--> PostgreSQL
    Dashboard <--> FalkorDB
```
## Core Components

### 1. Memory Architecture (6 Tiers)
```mermaid
flowchart TB
    subgraph Memory["MEMORY SYSTEM"]
        subgraph Tier1["Primary Storage (PostgreSQL)"]
            Episodic["Episodic<br/>Events, Decisions,<br/>Interactions"]
            Semantic["Semantic<br/>Knowledge, Facts,<br/>Learnings"]
            Procedural["Procedural<br/>Workflows, How-tos,<br/>Templates"]
        end
        subgraph Tier2["Secondary Storage"]
            Cache["Cache (In-Memory)<br/>LLM Responses,<br/>Tool Outputs"]
            KnowledgeGraph["Knowledge Graph<br/>(FalkorDB)<br/>Entities, Relations"]
            Hybrid["Hybrid<br/>(Unified Query)<br/>Cross-tier Retrieval"]
        end
        subgraph Consolidation["Sleep-Cycle Consolidation"]
            Light["Light Phase<br/>Cleanup"]
            Deep["Deep Phase<br/>Scoring, Pruning"]
            REM["REM Phase<br/>Synthesis,<br/>Pattern Extraction"]
        end
    end
    Tier1 --> Consolidation
    Tier2 --> Consolidation
    Light --> Deep --> REM
```
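The three consolidation phases can be read as a pipeline: cleanup, then score-and-prune, then synthesis. The sketch below is a minimal illustration of that flow; the record fields, thresholds, and function names are hypothetical, not Aegis's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical memory record; field names are illustrative only.
@dataclass
class MemoryRecord:
    content: str
    score: float = 0.5
    last_access: datetime = field(default_factory=datetime.utcnow)

def light_phase(memories, max_age_days=30):
    """Light phase: drop stale records unless they score highly."""
    cutoff = datetime.utcnow() - timedelta(days=max_age_days)
    return [m for m in memories if m.last_access >= cutoff or m.score > 0.8]

def deep_phase(memories, keep_ratio=0.8):
    """Deep phase: rank by score and prune the bottom fraction."""
    ranked = sorted(memories, key=lambda m: m.score, reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_ratio))]

def rem_phase(memories):
    """REM phase: synthesize a summary from the surviving records."""
    return {
        "pattern_count": len(memories),
        "top": memories[0].content if memories else None,
    }

def consolidate(memories):
    # Light --> Deep --> REM, as in the diagram above.
    return rem_phase(deep_phase(light_phase(memories)))
```

The staged design means each phase can run (and fail) independently during the nightly window.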
### 2. LLM Routing (4 Tiers)
```mermaid
flowchart LR
    Task[Task] --> Router{LLM Router}
    Router -->|"Strategic,<br/>Architecture"| Opus["Tier 1<br/>Claude Opus<br/>RARE"]
    Router -->|"Fast Ops,<br/>Classification"| Haiku["Tier 1.5<br/>Claude Haiku<br/>HIGH FREQ"]
    Router -->|"90% Tasks,<br/>Operational"| GLM["Tier 2<br/>GLM-4.7 (Z.ai)<br/>~8 req/min"]
    Router -->|"Fallback,<br/>Offline,<br/>Vision"| Ollama["Tier 3<br/>Ollama Local<br/>UNLIMITED"]
    subgraph OllamaModels["Local Models"]
        llama["llama3.2"]
        nomic["nomic-embed-text"]
        llava["llava"]
        deepseek["deepseek-r1"]
    end
    Ollama --> OllamaModels
```
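The routing logic implied by the diagram can be sketched as a small lookup. This is an assumption-laden illustration drawn from the edge labels above: the tier table, the vision/offline special-casing, and the "unmatched work defaults to GLM" rule are guesses at the policy, not Aegis's actual router code.

```python
# Tier table mirroring the diagram; "use" sets are inferred from edge labels.
TIERS = {
    "opus":  {"name": "Claude Opus",  "use": {"strategic", "architecture"}},
    "haiku": {"name": "Claude Haiku", "use": {"fast_ops", "classification"}},
    "glm":   {"name": "GLM-4.7",      "use": {"operational"}},
    "ollama": {"name": "Ollama Local", "use": set()},  # fallback / offline / vision
}

def route(task_kind: str, online: bool = True) -> str:
    """Return the tier key for a task, falling back to local Ollama."""
    # Offline operation and vision tasks go straight to the local tier.
    if not online or task_kind == "vision":
        return "ollama"
    for tier, spec in TIERS.items():
        if task_kind in spec["use"]:
            return tier
    # The ~90% operational bulk lands on GLM by default.
    return "glm"
```

Keeping the policy as data (the `use` sets) rather than branching code makes the tier boundaries easy to audit and retune.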
### 3. Communication Channels
```mermaid
flowchart TB
    subgraph Channels["MULTI-CHANNEL COMMUNICATION"]
        subgraph Primary["Primary Channels"]
            Discord2["Discord<br/>#general, #logs,<br/>#alerts, #journal,<br/>#tasks"]
            WhatsApp2["WhatsApp<br/>+447441443388<br/>Commands:<br/>task:, c:, status"]
            Voice2["Voice (Vonage)<br/>Inbound Calls<br/>ASR Recognition"]
        end
        subgraph Secondary["Secondary Channels"]
            Telegram2["Telegram<br/>Time-sensitive<br/>Health Alerts"]
            Email2["Email<br/>aegis@richardbankole.com<br/>AI Triage"]
        end
    end
```
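The WhatsApp command prefixes shown above (`task:`, `c:`, `status`) imply a small inbound dispatcher. The sketch below shows one plausible shape; the command names returned (`create_task`, `chat`) and the default-to-chat behavior are assumptions, not Aegis's actual handler.

```python
def parse_command(text: str):
    """Map an inbound WhatsApp message to a (command, payload) pair.

    Recognizes the documented prefixes: "task:", "c:", and the bare
    "status" keyword; anything else is treated as free-form chat.
    """
    stripped = text.strip()
    if stripped.lower() == "status":
        return ("status", None)
    for prefix, command in (("task:", "create_task"), ("c:", "chat")):
        if stripped.lower().startswith(prefix):
            return (command, stripped[len(prefix):].strip())
    return ("chat", stripped)  # assumed default for unprefixed messages
```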
### 4. Workflow Engine
```mermaid
flowchart TB
    subgraph Workflow["WORKFLOW SYSTEM (LangGraph-Inspired)"]
        subgraph Nodes["Node Types"]
            LLMNode["LLMNode<br/>Tier-aware Inference"]
            FunctionNode["FunctionNode<br/>Python Execution"]
            ToolNode["ToolNode<br/>MCP Tools"]
            InterruptNode["InterruptNode<br/>Human Gate"]
            ParallelNode["ParallelNode<br/>Concurrent Exec"]
            SubworkflowNode["SubworkflowNode<br/>Nested Flows"]
        end
        subgraph Edges["Edge Types"]
            Static["Edge<br/>(static)"]
            Conditional["CondEdge<br/>(conditional)"]
            Error["ErrorEdge<br/>(fallback)"]
        end
        Persistence["PostgreSQL<br/>workflow_checkpoints<br/>(crash recovery)"]
    end
    Nodes --> Edges
    Edges --> Persistence
```
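The node/edge/checkpoint model above can be illustrated with a toy executor. This is a hand-rolled sketch of the LangGraph-inspired pattern, not the real Aegis workflow API: class names, the state-dict convention, and the in-memory checkpoint list (standing in for the `workflow_checkpoints` table) are all illustrative.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    name: str
    fn: Callable[[dict], dict]  # FunctionNode-style: state in, state out

@dataclass
class Edge:
    source: str
    target: str
    condition: Optional[Callable[[dict], bool]] = None  # None = static edge

class Workflow:
    """Tiny graph executor: run nodes, checkpoint state after each step."""

    def __init__(self, nodes, edges, entry):
        self.nodes = {n.name: n for n in nodes}
        self.edges = edges
        self.entry = entry
        self.checkpoints = []  # stands in for workflow_checkpoints rows

    def run(self, state: dict) -> dict:
        current = self.entry
        while current:
            state = self.nodes[current].fn(state)
            # Persisting after every node is what enables crash recovery.
            self.checkpoints.append((current, dict(state)))
            nxt = None
            for e in self.edges:
                if e.source == current and (e.condition is None or e.condition(state)):
                    nxt = e.target
                    break
            current = nxt
        return state
```

Replaying from the last checkpoint row is what lets a crashed run resume mid-graph instead of restarting.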
### 5. Execution Tracing
```mermaid
flowchart TB
    subgraph TraceContext["TraceContext"]
        TC_ID["trace_id: UUID"]
        TC_WF["workflow_name: string"]
        TC_META["metadata: dict"]
    end
    subgraph Spans["Spans (1:many)"]
        subgraph Span["Span"]
            S_ID["span_id: UUID"]
            S_PARENT["parent_span_id: UUID"]
            S_NAME["name: string"]
            S_KIND["kind: SpanKind"]
            S_IO["inputs/outputs: dict"]
            S_MODEL["model: string"]
            S_TOKENS["tokens_in/out: int"]
            S_DUR["duration_ms: float"]
        end
    end
    subgraph SpanKind["SpanKind (enum)"]
        LLM["LLM_CALL"]
        TOOL["TOOL_CALL"]
        FUNC["FUNCTION"]
        NODE["NODE"]
        DEC["DECISION"]
    end
    TraceContext --> Spans
    Span --> SpanKind
```
API Endpoints:

- `GET /api/traces/` - List traces
- `GET /api/traces/{id}` - Get trace detail
- `GET /api/traces/stats/summary` - Statistics
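The trace model diagrammed above transcribes naturally into dataclasses. This sketch follows the field list verbatim, but the exact types and the `add_span` helper are illustrative; in Aegis these map to `execution_traces` rows rather than in-memory objects.

```python
import uuid
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class SpanKind(Enum):
    LLM_CALL = "LLM_CALL"
    TOOL_CALL = "TOOL_CALL"
    FUNCTION = "FUNCTION"
    NODE = "NODE"
    DECISION = "DECISION"

@dataclass
class Span:
    name: str
    kind: SpanKind
    span_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    parent_span_id: Optional[str] = None  # links spans into a tree
    inputs: dict = field(default_factory=dict)
    outputs: dict = field(default_factory=dict)
    model: Optional[str] = None           # set for LLM_CALL spans
    tokens_in: int = 0
    tokens_out: int = 0
    duration_ms: float = 0.0

@dataclass
class TraceContext:
    workflow_name: str
    trace_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    metadata: dict = field(default_factory=dict)
    spans: list = field(default_factory=list)  # the 1:many relationship

    def add_span(self, span: Span) -> Span:
        self.spans.append(span)
        return span
```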
## Module Inventory (48 Packages)
| Category | Modules | Purpose |
|---|---|---|
| Core | `db`, `config`, `scheduler` | Database, configuration, job scheduling |
| Memory | `memory` (episodic, semantic, procedural, cache, graphiti_client, consolidation) | Multi-tier memory system |
| LLM | `llm` (router, glm, haiku, ollama), `gemini` | Model routing and inference |
| Agents | `agents`, `orchestration`, `planning`, `reflection` | Agent templates, coordination, HTN planning |
| Workflows | `workflows` (graph, nodes, edges, executor) | Graph-based workflow engine |
| Tracing | `tracing` | Execution observability |
| Communication | `discord`, `telegram`, `whatsapp`, `integrations/vonage` | Multi-channel messaging |
| Email | `email` (triage, processor, scheduled) | Gmail integration |
| APIs | `research`, `code`, `meeting`, `sentiment`, `content` | API product implementations |
| Monetization | `monetization`, `billing`, `marketplace`, `revenue` | Business logic |
| Monitoring | `monitor`, `infra` (anomaly detection) | Service and infrastructure monitoring |
| Dashboard | `dashboard` (31 route files, templates) | Web UI and API endpoints |
## Scheduled Jobs (18+)
```mermaid
gantt
    title Daily Operating Schedule (UTC)
    dateFormat HH:mm
    axisFormat %H:%M
    section Maintenance
    Memory Consolidation : 02:00, 1h
    Transcript Digestion : 03:00, 1h
    section Morning
    Morning Status : 06:00, 30m
    section Active Work
    Health Checks (5min) : 08:00, 14h
    Email Triage (hourly) : 08:00, 14h
    Monitor Check (30min) : 08:00, 14h
    section Evening
    Evening Summary : 22:00, 30m
```
| Schedule | Job | Purpose |
|---|---|---|
| Every 5 min | `health_check`, `docker_health` | System health monitoring |
| Every 15 min | `email_scheduled_send` | Queued email delivery |
| Every 30 min | `discord_task_poll`, `monitor_check` | Task reception, website monitoring |
| Hourly | `email_triage` | Inbox classification |
| 02:00 UTC | `memory_consolidation` | Sleep-cycle memory optimization |
| 03:00 UTC | `transcript_digestion` | Claude history → knowledge graph |
| 06:00 UTC | `morning_status` | Daily briefing generation |
| 22:00 UTC | `evening_summary` | Daily digest with DigestGenerator |
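The job table above can be modeled as a simple schedule registry. In Aegis these jobs run under APScheduler, but the pure-stdlib sketch below just captures the interval/cron split from the table; the `register`/`due_at` API is illustrative, and only the job names come from the document.

```python
JOBS = {}

def register(name, every_minutes=None, at_utc=None):
    """Register a job on either a minute interval or a fixed daily UTC time."""
    JOBS[name] = {"every_minutes": every_minutes, "at_utc": at_utc}

# Interval jobs (left column: "Every N min" / "Hourly")
register("health_check", every_minutes=5)
register("docker_health", every_minutes=5)
register("email_scheduled_send", every_minutes=15)
register("discord_task_poll", every_minutes=30)
register("monitor_check", every_minutes=30)
register("email_triage", every_minutes=60)
# Fixed daily jobs (left column: "HH:MM UTC")
register("memory_consolidation", at_utc="02:00")
register("transcript_digestion", at_utc="03:00")
register("morning_status", at_utc="06:00")
register("evening_summary", at_utc="22:00")

def due_at(minute_of_day: int):
    """Return interval jobs that fire at a given minute of the day (0-1439)."""
    return sorted(
        name for name, spec in JOBS.items()
        if spec["every_minutes"] and minute_of_day % spec["every_minutes"] == 0
    )
```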
## API Endpoint Summary (60+)
| Route Category | Count | Key Endpoints |
|---|---|---|
| System | 5 | `/health`, `/api/status`, `/api/memory/recent` |
| LLM | 9 | `/api/llm/query`, `/api/ollama/chat`, `/api/llm/analyze` |
| Research | 7 | `/api/research/create`, `/api/research/synthesize` |
| Email | 11 | `/api/email/triage`, `/api/email/draft` |
| Code | 4 | `/api/code/review`, `/api/code/analyze` |
| Monitor | 17 | `/api/monitor/check`, `/api/monitor/alerts` |
| Vonage | 10 | `/api/vonage/whatsapp/`, `/api/vonage/voice/` |
| Tracing | 5 | `/api/traces/`, `/api/traces/stats/summary` |
| Tasks | 10 | `/api/scheduled-tasks/queue` |
| Billing | 7 | `/api/billing/webhook` |
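For callers, the endpoint paths above compose onto the dashboard's base URL. A minimal client sketch, assuming the public Traefik hostname from the overview diagram and no authentication details (which the document does not specify):

```python
from urllib.parse import urljoin

class AegisClient:
    """Builds request URLs for the dashboard API; transport is left out."""

    def __init__(self, base_url="https://aegis.rbnk.uk"):
        self.base_url = base_url.rstrip("/") + "/"

    def url(self, path: str) -> str:
        return urljoin(self.base_url, path.lstrip("/"))

    def trace_detail(self, trace_id: str) -> str:
        # GET /api/traces/{id}
        return self.url(f"api/traces/{trace_id}")
```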
## Infrastructure
```mermaid
flowchart LR
    subgraph External["External"]
        Internet((Internet))
        Proxmox["Proxmox<br/>157.180.63.15"]
    end
    subgraph Dockerhost["Dockerhost (10.10.10.10)"]
        TraefikMain["Traefik<br/>(TCP Passthrough)"]
    end
    subgraph Aegis["Aegis LXC (10.10.10.103)"]
        TraefikProxy["Traefik Proxy"]
        DashboardSvc["Dashboard<br/>:8080"]
        SchedulerSvc["Scheduler"]
        FalkorSvc["FalkorDB<br/>:6379"]
    end
    subgraph Host["Host Services"]
        Postgres["PostgreSQL<br/>:5432"]
        OllamaSvc["Ollama<br/>:11434"]
    end
    Internet --> Proxmox --> TraefikMain --> TraefikProxy
    TraefikProxy --> DashboardSvc
    DashboardSvc <--> Postgres
    DashboardSvc <--> OllamaSvc
    DashboardSvc <--> FalkorSvc
```
| Component | Location | Purpose |
|---|---|---|
| Dashboard | Docker: `aegis-dashboard:8080` | FastAPI application |
| Scheduler | Docker: `aegis-scheduler` | APScheduler daemon |
| FalkorDB | Docker: `falkordb:6379` | Knowledge graph |
| PostgreSQL | `host.docker.internal:5432` | Primary database |
| Ollama | `localhost:11434` | Local LLM inference |
| Traefik | External (dockerhost) | Reverse proxy |
Generated: 2026-01-04
Version: 2.0 (Mermaid)