Architecture#

Nixi is a Go application with a clean separation between the agent core, LLM client, tool system, and UI frontends.

Data Flow#

User Input --> TUI/Web UI --> Agent --> LLM Client --> Ollama/OpenAI API
                                |
                                +--> Tool Registry --> Executor --> System

Packages#

  • cmd/nixi/ – entry point, CLI flags, launches TUI or web daemon
  • internal/agent/ – agent loop with tool dispatch, system prompt
  • internal/llm/ – dual-protocol streaming client (Ollama + OpenAI)
  • internal/tools/ – tool interface, registry, executor abstraction
  • internal/tui/ – Bubble Tea terminal UI
  • internal/web/ – web daemon (planned)
  • internal/config/ – configuration loading

Agent Loop#

The agent runs up to 10 iterations per user message:

  1. Send conversation history + tool schemas to the LLM
  2. Stream the response tokens to the UI
  3. If the LLM requests tool calls, execute them
  4. Feed tool results back into the conversation
  5. Repeat until the LLM responds with text (no tool calls)

Streaming#

All communication uses Go channels:

  • llm.Client.StreamChat() returns <-chan StreamEvent
  • The agent collects events, executes tools, and re-emits agent.StreamEvent
  • The TUI reads from the agent channel via Bubble Tea commands
  • Context cancellation propagates immediately through all layers

Executor Abstraction#

Tools never call os/exec directly. Instead, they use the Executor interface:

  • LocalExecutor – runs commands via os/exec on the controller node
  • SSHExecutor – runs commands on remote nodes via SSH (planned)
  • ResolveExecutor(node) – routes to the correct executor based on the node parameter

This means the same tool code works for both local and remote execution.

Networking#

  • Same-node containers use Podman DNS (container names)
  • Cross-node references use the node’s IP address (Podman DNS is local-only)