# Architecture
Nixi is a Go application with a clean separation between the agent core, LLM client, tool system, and UI frontends.
## Data Flow

```
User Input --> TUI/Web UI --> Agent --> LLM Client --> Ollama/OpenAI API
                                |
                                +--> Tool Registry --> Executor --> System
```
## Packages

- `cmd/nixi/` – entry point, CLI flags, launches TUI or web daemon
- `internal/agent/` – agent loop with tool dispatch, system prompt
- `internal/llm/` – dual-protocol streaming client (Ollama + OpenAI)
- `internal/tools/` – tool interface, registry, executor abstraction
- `internal/tui/` – Bubble Tea terminal UI
- `internal/web/` – web daemon (coming soon)
- `internal/config/` – configuration loading
## Agent Loop
The agent runs up to 10 iterations per user message:
1. Send conversation history + tool schemas to the LLM
2. Stream the response tokens to the UI
3. If the LLM requests tool calls, execute them
4. Feed tool results back into the conversation
5. Repeat until the LLM responds with text (no tool calls)
## Streaming
All communication uses Go channels:
- `llm.Client.StreamChat()` returns `<-chan StreamEvent`
- The agent collects events, executes tools, and re-emits `agent.StreamEvent`
- The TUI reads from the agent channel via Bubble Tea commands
- Context cancellation propagates immediately through all layers
## Executor Abstraction
Tools never call `os/exec` directly. Instead, they use the `Executor` interface:
- `LocalExecutor` – runs commands via `os/exec` on the controller node
- `SSHExecutor` – runs commands on remote nodes via SSH (planned)
- `ResolveExecutor(node)` – routes to the correct executor based on the `node` parameter
This means the same tool code works for both local and remote execution.
## Networking
- Same-node containers use Podman DNS (container names)
- Cross-node references use the node’s IP address (Podman DNS is local-only)
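For instance, a service's connection setting would differ depending on where the consumer runs (the key names and IP below are hypothetical):

```ini
; same node: Podman DNS resolves the container name
db_host = postgres

; cross-node: use the node's IP, since Podman DNS does not span hosts
db_host = 192.168.1.20
```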