# Install in 10 minutes
Lex Custis runs as a self-contained Docker Compose stack. The main prerequisite on the host is Docker with the Compose v2 plugin.
## Prerequisites

- Linux or macOS (Windows via WSL 2)
- Docker Engine 24+ with the Compose v2 plugin
- Python 3 (used only to generate random secrets)
- `curl`
- At least 6 GB RAM free, 15 GB disk free
- If using self-hosted Ollama (`--with-ollama`), add ~5 GB for the model
## Quickstart

```shell
git clone https://github.com/vbalagovic/lex-custis.git lex-custis
cd lex-custis
./install.sh --with-ollama
```
`./install.sh` is the only command you need. It does all of this:

- Verifies Docker + Compose are available.
- Generates a `.env` with random JWT + HMAC + Postgres secrets.
- Runs `docker compose up -d --build` to start every service.
- Polls `/health` until the backend is ready.
- If `--with-ollama`, pulls the default model (~3-5 GB) into the `ollama_models` volume.
- Seeds demo data: 2 organisations, sample audit entries, 2 incidents.
- Prints the demo login + URLs.
Roughly 10 minutes on a fresh VM, dominated by the Ollama model pull and embedding-model download.
## Flags

| Flag | What it does |
|---|---|
| `--with-ollama` | Starts the optional Ollama service and pulls the model in `OLLAMA_MODEL` (default `llama3.1:8b`). Skip if you'll use Mistral. |
| `--no-seed` | Skips demo data seeding. |
| `--reset` | **Destructive.** Tears down volumes and rebuilds from scratch. Wipes the existing `.env` too. |
## Choosing an LLM

Lex Custis's OSS v0.1 ships two providers:

- **Mistral** (EU-sovereign, default). Add your API key to `.env` as `MISTRAL_API_KEY`.
- **Ollama** (self-hosted). Use `./install.sh --with-ollama` and change the model via `OLLAMA_MODEL` in `.env`.
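The relevant `.env` entries look roughly like this (the key value is a placeholder; `OLLAMA_MODEL` defaults to `llama3.1:8b`):

```shell
# Mistral (default provider): paste your real API key here
MISTRAL_API_KEY=your-mistral-api-key

# Ollama (self-hosted): any model tag Ollama can pull
OLLAMA_MODEL=llama3.1:8b
```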
Anthropic / OpenAI / Azure OpenAI are commercial plugins — they live
in a separate commercial repo and aren't in the OSS core. You can wire
them yourself by implementing BaseLLMProvider
(backend/app/services/llm/base.py).
Per-org LLM selection is supported in the data model: each
organisation can pick its own llm_provider + llm_model, overriding
the global default. In v0.1 the UI to change this lives at
/settings (admin-only).
## Ports

| Service | Port | Purpose |
|---|---|---|
| `frontend` | 3000 | Next.js UI |
| `backend` | 8081 | FastAPI + OpenAPI at `/docs` |
| `postgres` | 5433 | DB, `lexcustis` / `$POSTGRES_PASSWORD` |
| `redis` | 6380 | Rate limits + daily budget counter |
| `qdrant` | 6333 / 6334 | Vector DB REST + gRPC |
| `ollama` | 11434 | Self-hosted LLM endpoint (only with `--with-ollama`) |
If any of these collide with ports already in use on your host, edit the `ports:` map in `docker-compose.yml` before running `install.sh`.
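For example, to move the backend off 8081 (a sketch; the service names match the table above, and only the host side of the mapping should change):

```yaml
services:
  backend:
    ports:
      - "18081:8081"   # host:container; change only the left-hand side
```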
## First login

The installer seeds two demo orgs:

| Org | Admin login | Password |
|---|---|---|
| Acme HR-tech (hiring-AI vendor) | `admin@acme-hr.demo` | `demo-demo-demo-demo` |
| Beta FinTech (lending vendor) | `admin@beta-fintech.demo` | `demo-demo-demo-demo` |
There's also `reviewer@acme-hr.demo` (non-admin role) for testing multi-user flows.
Both orgs ship with:
- 3 seeded audit-log entries with a real HMAC chain.
- 1 oversight action recorded on the first entry.
- 1 modified-action on the second entry.
- 1 open incident with an SLA clock ticking.
- 1 reported-to-authority incident for contrast.
## What's running

Six containers, plus Ollama when enabled:

| Container | Role |
|---|---|
| `backend` | FastAPI + Alembic migrations (auto-run on boot) |
| `celery-worker` | Placeholder, ready for v0.2 PMM batch work |
| `frontend` | Next.js 15 |
| `postgres` | Postgres 16-alpine + `pgdata` volume |
| `redis` | Redis 7-alpine |
| `qdrant` | Qdrant 1.13 + `qdrant_data` volume |
| `ollama` (optional) | Ollama latest + `ollama_models` volume |
Run `docker compose ps` to verify everything is up. Tail logs with `docker compose logs -f backend`, etc.
## Troubleshooting

**`install.sh` says Docker not found.** Install Docker Engine 24+ and the Compose v2 plugin. On macOS, Docker Desktop ships both. On Ubuntu, install them from Docker's official apt repository.
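A minimal sketch for Ubuntu, assuming Docker's apt repository has already been configured (see Docker's install docs for the repository setup; the package names are Docker's standard ones):

```shell
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
docker compose version   # confirms the Compose v2 plugin is installed
```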
**Backend 500s on first chat.** Either no LLM is configured, or the configured provider is unreachable. Check the backend logs, then either set `MISTRAL_API_KEY` in `.env` (followed by `docker compose restart backend`) or run `./install.sh --with-ollama`.
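One way to inspect the failure (standard Compose commands; the grep pattern is only illustrative):

```shell
# Tail recent backend logs and filter for LLM-provider errors
docker compose logs --tail=200 backend | grep -i -E "mistral|ollama|llm|error"
```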
**Ollama model pull stuck.** First-run pulls ~5 GB. If it fails, retry the pull manually.
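A retry can be issued inside the running container (a sketch; substitute your `OLLAMA_MODEL` value if you changed the default):

```shell
# Re-run the pull for the default model inside the ollama container
docker compose exec ollama ollama pull llama3.1:8b
```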
**Tests fail locally.** Tenant-isolation integration tests need `TEST_DATABASE_URL`. See `CONTRIBUTING.md`.
"Secret is a placeholder" at startup. You kept
CHANGE-ME-IN-PRODUCTION in .env. Regenerate:
and replace JWT_SECRET_KEY and AUDIT_HMAC_MASTER_KEY, then
docker compose restart backend.
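Since Python 3 is already a prerequisite, fresh secrets can be generated with its `secrets` module (a sketch; any sufficiently long random hex string works):

```shell
# Print two independent 64-character hex secrets for .env
python3 -c "import secrets; print('JWT_SECRET_KEY=' + secrets.token_hex(32))"
python3 -c "import secrets; print('AUDIT_HMAC_MASTER_KEY=' + secrets.token_hex(32))"
```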
## Stopping / resetting

```shell
docker compose down      # stop containers, keep volumes
docker compose down -v   # stop + wipe all data
./install.sh --reset     # full nuke + rebuild
```
Remember: resetting wipes the HMAC chain. Any previously generated dossiers remain valid (the chain is embedded), but the live database starts over.
## Next steps
- Generate your first Annex IV dossier for a sample period.
- Exercise the Art. 73 incident workflow.
- Browse the architecture to understand the hash chain.