A FastAPI service that synchronizes models from Ollama or other LiteLLM/OpenAI-compatible servers into a LiteLLM proxy. It periodically scans upstream sources for models, persists them to a database, and registers them with LiteLLM using the admin API. Includes a web UI for provider management, model editing, and monitoring.
- Database-driven provider management with SQLite persistence
- Provider prefixes (e.g., `mks-ollama/qwen3:8b`) for namespace organization
- Ollama mode configuration (native Ollama format vs. OpenAI-compatible)
- Model parameter editing with user override preservation across syncs
- Orphaned model detection - highlights models no longer available in source
- Per-model actions: Refresh from source, Push to LiteLLM, Edit parameters
- Configurable sources (Ollama or LiteLLM/OpenAI compatible)
- Periodic sync job that registers upstream models with LiteLLM
- Manual sync trigger from the UI
- Web UI for browsing models with database persistence
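Orphan detection is, at its core, a set difference between the models the database remembers and the models the source currently reports. A minimal sketch (`find_orphans` is a hypothetical helper for illustration, not part of the project's API):

```python
def find_orphans(db_model_ids: list[str], upstream_model_ids: list[str]) -> list[str]:
    """Models persisted locally that the source no longer advertises."""
    return sorted(set(db_model_ids) - set(upstream_model_ids))

# A model that disappeared upstream is flagged as orphaned
find_orphans(["qwen3:8b", "llama3:8b"], ["qwen3:8b"])  # -> ["llama3:8b"]
```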
- Install dependencies: `pip install -e .`
- Run the server: `PORT=8000 litellm-updater`, or `PORT=8000 uvicorn litellm_updater.web:create_app --port $PORT`. The server defaults to `http://0.0.0.0:8000`.
- Configure providers and the LiteLLM destination:
  - Navigate to `http://localhost:8000/admin` to set the LiteLLM base URL, update the sync interval, or manage providers.
  - NEW: If you have existing sources in `config.json`, use the "Migrate from config.json" button to move them to the database.
  - Add new providers with an optional prefix and Ollama mode configuration.
  - A default `data/config.json` is generated on first run with automatic sync disabled and the LiteLLM destination at `http://localhost:4000`.
- Trigger sync:
  - The scheduler runs automatically only when the sync interval is greater than zero.
  - Use the "Run sync now" button on the overview or models page to trigger a manual sync.
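The generated `data/config.json` can be read with a few lines of standard-library Python. The key names below mirror the example in the Configuration section; the `load_config` helper itself is illustrative, not part of the project's code:

```python
import json
from pathlib import Path

def load_config(path: Path) -> dict:
    """Read the generated config file and apply the documented defaults."""
    cfg = json.loads(path.read_text())
    litellm = cfg.get("litellm", {})
    return {
        "base_url": litellm.get("base_url", "http://localhost:4000"),
        "api_key": litellm.get("api_key"),
        # 0 disables the periodic scheduler; only a positive value runs it
        "sync_interval_seconds": cfg.get("sync_interval_seconds", 0),
    }
```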
- Build the image directly: `docker build -t litellm-updater .`, then `docker run --rm -e PORT=8000 -p 8000:8000 -v $(pwd)/data:/app/data litellm-updater`
- Or use Docker Compose with the provided `example.env` (copy or override values as needed): `cp example.env .env`, then `docker-compose --env-file .env up --build`
The compose file binds the UI to `${PORT:-8000}` for both the host and container, and mounts the local `data/` directory so configuration persists across restarts. An `env-sync` helper service runs before the app to append any new variables from the container's `env.example` into your local `.env` without overwriting existing values. The stack also includes a `watchtower` container that checks for image updates every `${WATCHTOWER_POLL_INTERVAL:-60}` seconds (configurable via `.env`) and only acts on services labeled for updates.
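The `env-sync` behavior described above ("append new variables, never overwrite existing ones") can be sketched in a few lines. This is an illustration of the merge rule, not the actual helper shipped in the image, and it assumes simple `KEY=VALUE` files:

```python
from pathlib import Path

def sync_env(example: Path, target: Path) -> None:
    """Append KEY=VALUE lines from `example` that `target` lacks."""
    existing = {
        line.split("=", 1)[0]
        for line in target.read_text().splitlines()
        if "=" in line and not line.lstrip().startswith("#")
    }
    additions = [
        line
        for line in example.read_text().splitlines()
        if "=" in line
        and not line.lstrip().startswith("#")
        and line.split("=", 1)[0] not in existing
    ]
    if additions:
        with target.open("a") as fh:
            fh.write("\n".join(additions) + "\n")
```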
- Create a virtual environment if you do not already have one: `python -m venv .venv` and `source .venv/bin/activate`
- Install dev dependencies so `pytest-asyncio` is available: `pip install -e ".[dev]"`
- Copy `tests/example.env` to `tests/.env` and point the values at live, reachable endpoints (URLs must include the scheme): `cp tests/example.env tests/.env`, then edit `tests/.env` to include `TEST_OLLAMA_URL` / `TEST_OPENAI_URL` and optional `*_KEY` values.
- Run the integration suite; it automatically loads `tests/.env` and skips live checks if no endpoints are configured: `pytest tests/test_sources_integration.py -q`
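A `tests/.env` might look like the following. The values are placeholders to replace with your own endpoints (`11434` is Ollama's default port); the exact contents of `tests/example.env` may differ:

```shell
# tests/.env -- placeholder values, point these at your own servers
TEST_OLLAMA_URL=http://localhost:11434
TEST_OPENAI_URL=http://localhost:4000/v1
# add the optional *_KEY values here if your endpoints require authentication
```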
Database (`data/models.db`):
- Providers (formerly "sources") are now stored in a SQLite database
- Model metadata, user edits, and orphan status are persisted
- Use the `/admin` UI to manage providers or migrate from `config.json`
Config file (`data/config.json`):
- Now contains only the LiteLLM destination and sync interval
- Providers are managed through the database (not config.json)
```json
{
  "litellm": {"base_url": "http://localhost:4000", "api_key": null},
  "sources": [],
  "sync_interval_seconds": 0
}
```

The `sources` array is legacy; providers now live in the database.

Providers table:
- id, name, base_url, type, api_key, prefix, default_ollama_mode
Models table:
- id, provider_id, model_id, litellm_params, user_params, is_orphaned, user_modified
- Tracks first_seen, last_seen, orphaned_at timestamps
- JSON fields for capabilities, raw_metadata
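The two tables above can be sketched with `sqlite3` from the standard library. The column types here are assumptions inferred from the field lists; the real schema is created by the project's Alembic migrations:

```python
import sqlite3

# Assumed DDL for illustration only; types and constraints are guesses.
SCHEMA = """
CREATE TABLE providers (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    base_url TEXT NOT NULL,
    type TEXT NOT NULL,
    api_key TEXT,
    prefix TEXT,
    default_ollama_mode TEXT
);
CREATE TABLE models (
    id INTEGER PRIMARY KEY,
    provider_id INTEGER REFERENCES providers(id),
    model_id TEXT NOT NULL,
    litellm_params TEXT,  -- JSON blob
    user_params TEXT,     -- JSON blob, preserved across syncs
    is_orphaned INTEGER DEFAULT 0,
    user_modified INTEGER DEFAULT 0,
    first_seen TEXT,
    last_seen TEXT,
    orphaned_at TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```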
- LiteLLM registration is performed via the `/model/new` endpoint, with the model name discovered from the upstream source
- Prefixes are applied to display names (e.g., `mks-ollama/qwen3:8b`) but not to internal model paths
- User-edited parameters are preserved across syncs (stored in the `user_params` field)
- Orphaned models (no longer present in the provider) are highlighted in red in the UI
- Database migrations are handled by Alembic (auto-run on startup)