# Getting Started
## Install

The quickest way to get started is the install script:

```shell
curl -sSL nixi.sh/install | bash
```
This will:
- Detect your system architecture and GPU
- Ask if you have an existing LLM server or want to install Ollama locally
- Select the best model for your hardware
- Write a config file and verify everything works
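The detection step can be sketched roughly like this (a simplified shell illustration, not the installer's actual logic; it assumes `nvidia-smi` or `rocminfo` is on `PATH` when the corresponding GPU stack is installed):

```shell
#!/usr/bin/env bash
# Rough sketch of hardware detection, similar in spirit to what the
# install script does -- not its real implementation.
arch=$(uname -m)

if command -v nvidia-smi >/dev/null 2>&1; then
    gpu="nvidia"
elif command -v rocminfo >/dev/null 2>&1; then
    gpu="amd"
else
    gpu="cpu"
fi

echo "arch=$arch gpu=$gpu"
```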
## NixOS Flake
Add Nixi to your flake inputs:
```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    nixi.url = "git+https://codeberg.org/ewrogers/nixi";
  };

  outputs = { nixpkgs, nixi, ... }: {
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      modules = [
        nixi.nixosModules.default
        {
          services.nixi = {
            enable = true;
            package = nixi.packages.x86_64-linux.default;
            llmUrl = "http://localhost:11434";
            model = "qwen3:30b-a3b";
          };
        }
      ];
    };
  };
}
```
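With the input and module in place, the standard flake rebuild applies the configuration (`myhost` matches the configuration name above; adjust to your own host):

```shell
sudo nixos-rebuild switch --flake .#myhost
```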
## Build from Source

```shell
git clone https://codeberg.org/ewrogers/nixi.git
cd nixi
go build -o nixi ./cmd/nixi
```
## Connect to an LLM
Nixi supports any OpenAI-compatible API. It auto-detects the protocol from the URL.
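One way to picture URL-based detection is the minimal Go sketch below. This is a hypothetical illustration, not Nixi's actual code: the `guessAPI` name and the port heuristic (Ollama's default port 11434 implies the Ollama protocol, anything else is treated as OpenAI-compatible) are assumptions, and the `--api` flag shown later exists to override detection explicitly.

```go
package main

import (
	"fmt"
	"strings"
)

// guessAPI infers a protocol flavor from the server URL.
// Heuristic chosen for illustration: Ollama's default port 11434
// means Ollama, everything else is treated as OpenAI-compatible.
func guessAPI(url string) string {
	if strings.Contains(url, ":11434") {
		return "ollama"
	}
	return "openai"
}

func main() {
	fmt.Println(guessAPI("http://localhost:11434"))  // ollama
	fmt.Println(guessAPI("http://your-server:8080")) // openai
}
```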
### Ollama (local)

```shell
nixi --url http://localhost:11434 --model qwen3:30b-a3b
```

### LM Studio

```shell
nixi --url http://localhost:1234 --model qwen/qwen3-30b-a3b
```

### Any OpenAI-compatible server

```shell
nixi --url http://your-server:8080 --model your-model --api openai
```
## First Run

Launch the TUI:

```shell
nixi
```

You will see:

- A header bar with your hostname, architecture, and active model
- A text input at the bottom

Type a message and press Enter to chat with Nixi.
## Slash Commands

Type `/` to see available commands:

- `/help` – list commands
- `/model <name>` – switch models
- `/system` – show system info
- `/theme` – toggle dark/light theme
- `/clear` – clear conversation
- `/quit` – exit
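A dispatcher along these lines could back that menu. This is a hypothetical Go sketch covering a few of the commands; the function name and return strings are illustrative, not Nixi's internals:

```go
package main

import (
	"fmt"
	"strings"
)

// dispatch routes one line of input: slash-prefixed lines are treated
// as commands, everything else is sent to the model as chat.
func dispatch(input string) string {
	if !strings.HasPrefix(input, "/") {
		return "chat: " + input
	}
	// Split "/model qwen3:30b-a3b" into command and argument.
	cmd, arg, _ := strings.Cut(strings.TrimPrefix(input, "/"), " ")
	switch cmd {
	case "help":
		return "listing commands"
	case "model":
		return "switching to " + arg
	case "clear":
		return "conversation cleared"
	case "quit":
		return "exiting"
	default:
		return "unknown command: /" + cmd
	}
}

func main() {
	fmt.Println(dispatch("/model qwen3:30b-a3b")) // switching to qwen3:30b-a3b
	fmt.Println(dispatch("hello"))                // chat: hello
}
```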