# Configuration

DevRecall reads `~/.devrecall/config.json`. You can edit it directly, or use:

- `devrecall config init`: write a default config
- `devrecall config show`: print the current config
Tokens and API keys are not in this file; they live in
`~/.devrecall/tokens/` (file mode 0600) or the OS keychain.
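Since the config is plain JSON, reading it from a script is straightforward. A minimal sketch (the `load_config` helper is hypothetical, not part of DevRecall):

```python
import json
from pathlib import Path

def load_config(path: Path = Path.home() / ".devrecall" / "config.json") -> dict:
    """Return the parsed DevRecall config, or {} when none exists yet."""
    if not path.exists():
        return {}
    return json.loads(path.read_text())
```

For example, `load_config().get("token_storage", "file")` falls back to the documented default when the key is absent.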
## Top-level shape

```json
{
  "git": { ... },
  "slack": { ... },
  "calendar": { ... },
  "github": { ... },
  "gitlab": { ... },
  "bitbucket": { ... },
  "jira": { ... },
  "confluence": { ... },
  "linear": { ... },
  "llm": { ... },
  "embedding": { ... },
  "privacy": { ... },
  "chat": { ... },
  "server": { ... },
  "token_storage": "file"
}
```

## Sources
```json
{
  "enabled": true,
  "scan_paths": ["~/Projects", "~/work"],
  "repos": ["~/work/backend-api"],
  "emails": ["pavel@company.com", "pavel.piliak@gmail.com"]
}
```

| Field | Notes |
|---|---|
| `scan_paths` | Walked recursively for `.git` directories |
| `repos` | Explicit repo paths (skip auto-discovery for these) |
| `emails` | Author emails that count as “you” (auto-detected if empty) |
## slack

```json
{ "enabled": true, "team_id": "T0123", "team_name": "ACME" }
```

Connect via `devrecall auth slack`. To connect multiple workspaces,
run the command once per workspace.
## calendar

```json
{ "enabled": true, "email": "pavel@company.com" }
```

Connect via `devrecall auth google`.
## github

```json
{ "enabled": true, "username": "pavelpilyak", "auth_mode": "oauth" }
```

`auth_mode` is one of `oauth`, `pat`, `gh-cli`.
## gitlab / bitbucket

```json
{
  "enabled": true,
  "base_url": "https://gitlab.example.com",
  "username": "pavel"
}
```

`base_url` is optional for cloud (defaults to gitlab.com /
bitbucket.org).
## jira / confluence / linear

```json
{
  "enabled": true,
  "base_url": "https://acme.atlassian.net",
  "auth_mode": "oauth",
  "email": "pavel@company.com"
}
```

`auth_mode` is `oauth` or `api-token` (`api-key` for Linear). `email`
is required for token-based Jira auth.
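For illustration, a token-based Jira config might look like this (values are examples, not defaults):

```json
{
  "enabled": true,
  "base_url": "https://acme.atlassian.net",
  "auth_mode": "api-token",
  "email": "pavel@company.com"
}
```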
## LLM

```json
{
  "llm": {
    "provider": "ollama",
    "model": "gemma4",
    "base_url": "http://localhost:11434"
  }
}
```

| Field | Values |
|---|---|
| `provider` | `ollama`, `openai`, `anthropic` |
| `model` | Provider-specific model name |
| `base_url` | Optional: for OpenAI-compatible endpoints (Groq, vLLM, etc.) |
Per-task model routing and fallback chains are described in LLM strategy.
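As a sketch of the `base_url` override, pointing the `openai` provider at an OpenAI-compatible endpoint could look like this (the Groq URL and model name are illustrative, not defaults):

```json
{
  "llm": {
    "provider": "openai",
    "model": "llama-3.1-70b-versatile",
    "base_url": "https://api.groq.com/openai/v1"
  }
}
```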
## Embeddings

```json
{ "embedding": { "provider": "onnx" } }
```

Default is `onnx`: the bundled all-MiniLM-L6-v2 model (384 dimensions),
with zero external dependencies. Override to `ollama` (all-minilm) or
`openai` (text-embedding-3-small) if you want a different model.
| Field | Values |
|---|---|
| `provider` | `onnx` (default), `ollama`, `openai` |
| `model` | Provider-specific model name; sensible default per provider |
| `base_url` | Optional: override Ollama / OpenAI-compatible endpoint |
| `dimensions` | Vector size; 0 = model default |
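For example, switching to OpenAI embeddings with an explicit vector size might look like this (1536 is text-embedding-3-small's native size; the fragment is illustrative):

```json
{
  "embedding": {
    "provider": "openai",
    "model": "text-embedding-3-small",
    "dimensions": 1536
  }
}
```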
## Privacy

```json
{ "privacy": { "slack": "summary", "linear": "full", "jira": "metadata" } }
```

Per-source storage mode. Values: `full`, `summary`, `metadata`. See
Privacy model.
## Chat

```json
{
  "chat": {
    "sync_freshness": {
      "default_ttl": "3h",
      "per_source": { "slack": "1h", "calendar": "30m" },
      "wait": "10s"
    }
  }
}
```

Controls how chat refreshes data before answering. `sync_freshness.disabled: true`
skips the pre-chat sync entirely.
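TTL values use a compact duration syntax (`3h`, `30m`, `10s`). DevRecall's actual parser is not documented; a minimal sketch of how such strings can map to durations:

```python
import re
from datetime import timedelta

# Hypothetical helper: maps strings like "3h", "30m", "10s" to timedeltas.
_UNITS = {"s": "seconds", "m": "minutes", "h": "hours", "d": "days"}

def parse_ttl(value: str) -> timedelta:
    match = re.fullmatch(r"(\d+)([smhd])", value.strip())
    if not match:
        raise ValueError(f"bad TTL: {value!r}")
    amount, unit = match.groups()
    return timedelta(**{_UNITS[unit]: int(amount)})
```

So `parse_ttl("3h")` yields a three-hour window: data younger than that is served from the local index without a pre-chat sync.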
## Server

```json
{ "server": { "port": 3725 } }
```

Local HTTP API port. Default 3725 (“DRCL” on a phone keypad). Bound
to 127.0.0.1 only.
## Token storage

```json
{ "token_storage": "file" }
```

`file` (default) writes tokens to `~/.devrecall/tokens/` with 0600
permissions. `keychain` stores them in the OS keychain (macOS Keychain
on macOS, Secret Service on Linux).
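The file backend's behavior amounts to writing one JSON file per source, readable only by its owner. A minimal sketch of that pattern (the `write_token` helper and token shape are hypothetical, not DevRecall's API):

```python
import json
from pathlib import Path

def write_token(tokens_dir: Path, source: str, token: dict) -> Path:
    """Write <source>.json with owner-only permissions (mode 0600)."""
    tokens_dir.mkdir(parents=True, exist_ok=True)
    path = tokens_dir / f"{source}.json"
    path.write_text(json.dumps(token))
    path.chmod(0o600)  # owner read/write only, as the docs describe
    return path
```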
| Purpose | Path |
|---|---|
| Config | `~/.devrecall/config.json` |
| Database | `~/.devrecall/devrecall.db` |
| Tokens (file mode) | `~/.devrecall/tokens/<source>.json` |
| Prompt templates | `~/.devrecall/prompts/*.tmpl` |
| Update cache | `~/.devrecall/version_check.json` |