# Configuration
Nixi can be configured via CLI flags, environment variables, or a config file.
## CLI Flags
| Flag | Description | Default |
|---|---|---|
| `--url` | LLM server URL | `http://localhost:11434` |
| `--model` | Model name | `qwen3:30b-a3b` |
| `--api` | API type: `auto`, `ollama`, `openai` | `auto` |
| `--version` | Print version and exit | – |
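For example, the defaults can be overridden per invocation (assuming the binary is installed as `nixi`; the server URL and model here are illustrative):

```shell
# Point Nixi at an LM Studio server and select a smaller model
nixi --url http://localhost:1234 --api openai --model qwen3:7b
```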
## Environment Variables
| Variable | Description | Default |
|---|---|---|
| `NIXI_LLM_URL` | LLM server URL | `http://localhost:11434` |
| `NIXI_MODEL` | Model name | `qwen3:30b-a3b` |
| `NIXI_API` | API type (`auto`/`ollama`/`openai`) | `auto` |
| `NIXI_CONTEXT_SIZE` | Context window size | `32768` |
| `NIXI_DATA_DIR` | Data directory | `~/.local/share/nixi/` |
| `NIXI_THEME` | TUI theme (`dark`/`light`) | `dark` |
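For example, to override settings for the current shell session (variable names from the table above; values are illustrative):

```shell
# Use a smaller model with a matching context window for this session
export NIXI_MODEL="qwen3:7b"
export NIXI_CONTEXT_SIZE=16384
```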
## Config File
The install script writes a config to `~/.config/nixi/config.toml`:

```toml
[llm]
url = "http://localhost:11434"
model = "qwen3:30b-a3b"
api_type = "auto"
context_size = 32768

[paths]
data_dir = "~/.local/share/nixi/"

[ui]
theme = "dark"
```
## API Auto-Detection
Nixi auto-detects whether your LLM server speaks Ollama or OpenAI protocol:
- Port `11434` – assumed Ollama
- Port `1234` – assumed OpenAI (LM Studio)
- Otherwise – probes `/api/tags` (Ollama) and `/v1/models` (OpenAI)
Supported servers: Ollama, LM Studio, llama.cpp, vLLM, and any OpenAI-compatible API.
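The port rule above can be sketched in shell (the function name and fallback value are illustrative, not Nixi's actual implementation):

```shell
# Print the API type the port rule would assume for a given server URL.
detect_api() {
  case "$1" in
    *:11434*) echo ollama ;;  # Ollama's default port
    *:1234*)  echo openai ;;  # LM Studio's default port
    *)        echo probe  ;;  # would fall back to probing /api/tags and /v1/models
  esac
}

detect_api "http://localhost:11434"   # → ollama
```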
## Context Size Defaults
For local models, Nixi sets context size based on model size:
| Model size | Context | Example |
|---|---|---|
| 7B or smaller | 16K | `qwen3:7b` |
| 30B | 32K | `qwen3:30b-a3b` |
| 70B+ | 64K | `qwen3:72b` |
Remote servers control their own context size.
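The sizing rule in the table can be sketched as a shell function (the function name, pattern matching on the model tag, and the fallback value are assumptions for illustration; Nixi's actual heuristic may differ):

```shell
# Map a model tag to a default context size per the table above.
default_context() {
  case "$1" in
    *70b*|*72b*) echo 65536 ;;  # 70B+ models
    *30b*)       echo 32768 ;;  # 30B models
    *7b*)        echo 16384 ;;  # 7B or smaller
    *)           echo 32768 ;;  # assumed fallback
  esac
}

default_context "qwen3:30b-a3b"   # → 32768
```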
## NixOS Module Options
When using the NixOS module:
```nix
services.nixi = {
  enable = true;
  package = inputs.nixi.packages.x86_64-linux.default;
  user = "nixi";    # default
  group = "nixi";   # default
  llmUrl = "http://localhost:11434";
  model = "qwen3:30b-a3b";
  apiType = "auto";
  contextSize = 32768;
  dataDir = "/var/lib/nixi";
};
```