# Custom LLM Providers
Use any OpenAI-compatible AI model with your employees. Connect your own endpoints, use self-hosted models, or choose alternative providers.
## Available Providers
By default, employees can use:
- Claude - Anthropic's Claude models (requires an API key)
- Big Pickle - Free option, no API key needed
- Custom Providers - Any OpenAI-compatible endpoint you add
## Adding a Custom Provider
Admins can add custom LLM providers:
1. Go to Settings → LLM Providers
2. Click Add Provider
3. Enter the provider details
4. Click Save
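Before saving, it can help to confirm the endpoint really speaks the OpenAI format by listing its models. A minimal sketch using only the standard library (the `BASE_URL` and `AUTH_TOKEN` values below are placeholders, not real credentials):

```python
import json
import urllib.request

# Placeholder values for the provider being added (assumptions for illustration).
BASE_URL = "https://api.openai.com/v1"
AUTH_TOKEN = "sk-..."

def models_url(base_url):
    """Normalize the Base URL field into the standard /models endpoint."""
    return base_url.rstrip("/") + "/models"

def list_models(base_url, token):
    """Fetch available model IDs; OpenAI-format APIs expose GET /models."""
    req = urllib.request.Request(
        models_url(base_url),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return [m["id"] for m in json.load(resp)["data"]]

# Example: print(list_models(BASE_URL, AUTH_TOKEN))
```

If the call returns a model list, the endpoint should work as a custom provider.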
## Provider Configuration
| Field | Description |
|---|---|
| Provider Name | Display name shown in the model chooser |
| Base URL | API endpoint (e.g., https://api.openai.com/v1) |
| Auth Token | API key or bearer token (encrypted at rest) |
| Model Name | Model identifier (e.g., gpt-4, llama-3.1-70b) |
| Display Color | Color indicator to distinguish providers in the UI |
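To show how the Base URL, Auth Token, and Model Name fields fit together, here is a hedged sketch of the OpenAI-style chat request an employee would make through a configured provider (values are illustrative placeholders):

```python
import json
import urllib.request

# Hypothetical values standing in for the configuration fields above.
BASE_URL = "https://api.openai.com/v1"   # Base URL field
AUTH_TOKEN = "sk-..."                    # Auth Token field
MODEL_NAME = "gpt-4"                     # Model Name field

def build_chat_request(base_url, token, model, prompt):
    """Build (but do not send) an OpenAI-format /chat/completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(BASE_URL, AUTH_TOKEN, MODEL_NAME, "Hello")
# Sending it with urllib.request.urlopen(req) returns the JSON completion.
```

Any provider that accepts this request shape can be plugged in by filling the four fields accordingly.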
## Compatible Services
Custom providers work with any OpenAI API-compatible service:
- Ollama - Run models locally
- LM Studio - Local model hosting
- vLLM - High-performance inference
- Together AI - Cloud model hosting
- OpenRouter - Multi-provider routing
- Any OpenAI-format API
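For example, a local Ollama instance could be configured roughly as follows (assuming Ollama's default OpenAI-compatible endpoint; the model name is illustrative):

```
Provider Name: Local Ollama
Base URL:      http://localhost:11434/v1
Auth Token:    ollama        (Ollama ignores the value, but a token is required)
Model Name:    llama3.1
```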
## Switching Providers
Users can switch between providers:
- New session - Choose provider when starting a conversation
- Mid-conversation - Switch via Chat Options dropdown
- Per-employee - Different employees can use different models
## Security
API keys are encrypted at rest and never exposed in the UI. Only admins can manage LLM providers.