Configuration#

Nixi can be configured via CLI flags, environment variables, or a config file.

CLI Flags#

--url      LLM server URL (default: http://localhost:11434)
--model    Model name (default: qwen3:30b-a3b)
--api      API type: auto, ollama, openai (default: auto)
--version  Print version and exit
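
For example, to point Nixi at an OpenAI-compatible server on a non-default port (the host, port, and model name here are illustrative, not defaults):

# Hypothetical invocation; adjust the URL and model to your setup
nixi --url http://localhost:8080 --model llama3:8b --api openai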

Environment Variables#

Variable            Description                      Default
NIXI_LLM_URL        LLM server URL                   http://localhost:11434
NIXI_MODEL          Model name                       qwen3:30b-a3b
NIXI_API            API type (auto/ollama/openai)    auto
NIXI_CONTEXT_SIZE   Context window size              32768
NIXI_DATA_DIR       Data directory                   ~/.local/share/nixi/
NIXI_THEME          TUI theme (dark/light)           dark
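
These variables can be exported from a shell profile; the values below simply restate the defaults from the table:

export NIXI_LLM_URL=http://localhost:11434
export NIXI_MODEL=qwen3:30b-a3b
export NIXI_API=auto
export NIXI_CONTEXT_SIZE=32768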

Config File#

The install script writes a default config file to ~/.config/nixi/config.toml:

[llm]
url = "http://localhost:11434"
model = "qwen3:30b-a3b"
api_type = "auto"
context_size = 32768

[paths]
data_dir = "~/.local/share/nixi/"

[ui]
theme = "dark"

API Auto-Detection#

Nixi auto-detects whether your LLM server speaks the Ollama or the OpenAI protocol:

  • Port 11434 – assumed to be Ollama
  • Port 1234 – assumed to be OpenAI (LM Studio)
  • Otherwise – probes /api/tags (Ollama) and /v1/models (OpenAI), as sketched below

Supported servers: Ollama, LM Studio, llama.cpp, vLLM, and any OpenAI-compatible API.
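
You can reproduce these probes manually with curl to see which protocol a server answers; the port here is illustrative:

# An Ollama server responds on /api/tags
curl -s http://localhost:11434/api/tags

# An OpenAI-compatible server responds on /v1/models
curl -s http://localhost:11434/v1/models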

Context Size Defaults#

For local models, Nixi picks a default context size based on the model's parameter count:

Model size      Context   Example
7B or smaller   16K       qwen3:7b
30B             32K       qwen3:30b-a3b
70B+            64K       qwen3:72b

Remote servers control their own context size.
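
To override the detected default for a local model, set NIXI_CONTEXT_SIZE explicitly; the value below is illustrative:

# Force a 16K context window regardless of model size
NIXI_CONTEXT_SIZE=16384 nixi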

NixOS Module Options#

When using the NixOS module:

services.nixi = {
  enable = true;
  package = inputs.nixi.packages.x86_64-linux.default;
  user = "nixi";           # default
  group = "nixi";          # default
  llmUrl = "http://localhost:11434";
  model = "qwen3:30b-a3b";
  apiType = "auto";
  contextSize = 32768;
  dataDir = "/var/lib/nixi";
};
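
The snippet above assumes the module is already imported. If the flake exposes it at the conventional nixosModules.default attribute (an assumption, not confirmed here), the import might look like:

# Hypothetical import path; check the flake's actual nixosModules output
imports = [ inputs.nixi.nixosModules.default ];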