LLM Support (OpenAI, Local)
Xio supports multiple large language model providers: you can use OpenAI's hosted models or run a local model through providers such as Ollama or Llama.cpp.
In your .env file, select the provider:

```env
LLM_PROVIDER=openai
# or
LLM_PROVIDER=local
```
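As an illustration, a provider switch like this might be consumed roughly as in the sketch below. The variable names OPENAI_API_KEY and LLM_BASE_URL, and all class and function names, are hypothetical and not taken from Xio's codebase:

```python
import os
from dataclasses import dataclass
from typing import Optional


@dataclass
class LLMConfig:
    provider: str
    base_url: str
    api_key: Optional[str] = None


def load_llm_config() -> LLMConfig:
    """Resolve which backend to use from LLM_PROVIDER.

    Hypothetical sketch only; Xio's real configuration code may differ.
    """
    provider = os.getenv("LLM_PROVIDER", "openai").lower()
    if provider == "openai":
        # Hosted models: requests go to OpenAI's API with an API key.
        # OPENAI_API_KEY is an assumed variable name, not confirmed by these docs.
        return LLMConfig(provider, "https://api.openai.com/v1",
                         api_key=os.getenv("OPENAI_API_KEY"))
    if provider == "local":
        # Local models: requests go to a locally running server such as Ollama.
        # LLM_BASE_URL is an assumed variable name; 11434 is Ollama's default port.
        return LLMConfig(provider, os.getenv("LLM_BASE_URL", "http://localhost:11434"))
    raise ValueError(f"Unsupported LLM_PROVIDER: {provider!r}")
```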
When using a local setup, Xio communicates with your running LLM instance (such as Ollama) over HTTP. This allows full offline operation when configured properly.
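For reference, a plain HTTP request to a local Ollama server looks roughly like the sketch below, assuming Ollama's default port (11434) and its /api/generate endpoint; the model name is illustrative and must be one you have pulled locally:

```python
import requests

# Sketch of a direct HTTP call to a local Ollama instance (default port 11434).
# "llama3" is an example model name; replace it with a model you have pulled.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Say hello from Xio.", "stream": False},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])
```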
Local models work best for workflows that require privacy or independence from external services. Hosted models can be used when higher performance or specific capabilities are required.