
Configuration

DevRecall reads ~/.devrecall/config.json. You can edit it directly, or use:

  • devrecall config init — write a default config
  • devrecall config show — print the current config

Tokens and API keys are not in this file — they live in ~/.devrecall/tokens/ (file-mode 0600) or the OS keychain.

Top-level structure:

  {
    "git": { ... },
    "slack": { ... },
    "calendar": { ... },
    "github": { ... },
    "gitlab": { ... },
    "bitbucket": { ... },
    "jira": { ... },
    "confluence": { ... },
    "linear": { ... },
    "llm": { ... },
    "embedding": { ... },
    "privacy": { ... },
    "chat": { ... },
    "server": { ... },
    "token_storage": "file"
  }

Git

  {
    "enabled": true,
    "scan_paths": ["~/Projects", "~/work"],
    "repos": ["~/work/backend-api"],
    "emails": ["pavel@company.com", "pavel.piliak@gmail.com"]
  }

  Field        Notes
  scan_paths   Walked recursively for .git directories
  repos        Explicit repo paths (skip auto-discovery for these)
  emails       Author emails that count as “you” (auto-detected if empty)

Slack

  { "enabled": true, "team_id": "T0123", "team_name": "ACME" }

Connect via devrecall auth slack. To connect multiple workspaces, run the command once per workspace.

Calendar

  { "enabled": true, "email": "pavel@company.com" }

Connect via devrecall auth google.

GitHub

  { "enabled": true, "username": "pavelpilyak", "auth_mode": "oauth" }

auth_mode is one of oauth, pat, or gh-cli.
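
For a personal access token, presumably only auth_mode changes (the username value here is illustrative):

  { "enabled": true, "username": "pavelpilyak", "auth_mode": "pat" }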

GitLab / Bitbucket

  {
    "enabled": true,
    "base_url": "https://gitlab.example.com",
    "username": "pavel"
  }

base_url is optional for cloud (defaults to gitlab.com / bitbucket.org).

Jira / Confluence / Linear

  {
    "enabled": true,
    "base_url": "https://acme.atlassian.net",
    "auth_mode": "oauth",
    "email": "pavel@company.com"
  }

auth_mode is oauth or api-token (api-key for Linear). email is required for token-based Jira auth.
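
For illustration, a token-based Jira setup with the same fields might look like this (values are placeholders):

  {
    "enabled": true,
    "base_url": "https://acme.atlassian.net",
    "auth_mode": "api-token",
    "email": "pavel@company.com"
  }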

LLM

  {
    "llm": {
      "provider": "ollama",
      "model": "gemma4",
      "base_url": "http://localhost:11434"
    }
  }
  Field      Values
  provider   ollama, openai, anthropic
  model      Provider-specific model name
  base_url   Optional — for OpenAI-compatible endpoints (Groq, vLLM, etc.)

Per-task model routing and fallback chains are described in LLM strategy.
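
As a sketch of base_url pointing at an OpenAI-compatible endpoint, a Groq-backed setup might look like this (the Groq URL and model name are assumptions, not taken from DevRecall itself):

  {
    "llm": {
      "provider": "openai",
      "model": "llama-3.1-8b-instant",
      "base_url": "https://api.groq.com/openai/v1"
    }
  }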

Embedding

  {
    "embedding": {
      "provider": "onnx"
    }
  }

Default is onnx — bundled all-MiniLM-L6-v2 (384 dimensions), zero external dependencies. Override to ollama (all-minilm) or openai (text-embedding-3-small) if you want a different model.

  Field        Values
  provider     onnx (default), ollama, openai
  model        Provider-specific model name; sensible default per provider
  base_url     Optional — override Ollama / OpenAI-compatible endpoint
  dimensions   Vector size; 0 = model default
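
For example, switching to the Ollama all-minilm model mentioned above would presumably look like:

  {
    "embedding": {
      "provider": "ollama",
      "model": "all-minilm"
    }
  }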

Privacy

  {
    "privacy": {
      "slack": "summary",
      "linear": "full",
      "jira": "metadata"
    }
  }

Per-source storage mode. Values: full, summary, metadata. See Privacy model.

Chat

  {
    "chat": {
      "sync_freshness": {
        "default_ttl": "3h",
        "per_source": { "slack": "1h", "calendar": "30m" },
        "wait": "10s"
      }
    }
  }

Controls how chat refreshes data before answering. sync_freshness.disabled: true skips the pre-chat sync entirely.
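
So skipping the pre-chat sync entirely would be:

  { "chat": { "sync_freshness": { "disabled": true } } }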

Server

  { "server": { "port": 3725 } }

Local HTTP API port. Default 3725 (“DRCL” on a phone keypad). Bound to 127.0.0.1 only.

Token storage

  { "token_storage": "file" }

file (default) writes tokens to ~/.devrecall/tokens/ with 0600 permissions. keychain stores them in the OS keychain (macOS Keychain on macOS, Secret Service on Linux).
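
Switching to the OS keychain is a one-line change:

  { "token_storage": "keychain" }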

File locations

  Purpose              Path
  Config               ~/.devrecall/config.json
  Database             ~/.devrecall/devrecall.db
  Tokens (file mode)   ~/.devrecall/tokens/<source>.json
  Prompt templates     ~/.devrecall/prompts/*.tmpl
  Update cache         ~/.devrecall/version_check.json