# Chat
devrecall chat is a REPL over your work memory. Ask questions like
“what was that auth bug I fixed in February?” and get an answer
grounded in your actual commits, threads, and meetings.
## Start a session

```shell
devrecall chat
```

```
DevRecall Chat — ask anything about your work history.
Type /help for commands, /quit to exit.

> what did I work on last week?
Chat needs an LLM configured. See Configure.
```
## What it can answer

| Pattern | Example |
|---|---|
| Time-window summaries | “what did I do in Q1?” |
| Specific recall | “what was that auth bug I fixed in February?” |
| Person-scoped | “what did I discuss with Sarah last week?” |
| Decision recall | “what did we decide about caching?” |
| Project-scoped | “summarize my work on the payment rewrite” |
| Metrics | “how many PRs did I review last quarter?” |
## How the agent answers

Chat is an agent loop. The LLM is given a small catalogue of read-only tools over your local SQLite database and decides which to call:
- `current_time` — anchor relative dates (“yesterday”, “Q1”)
- `list_activities` / `count_activities` — filter by date, source, type, identity
- `search_activities` — FTS5 keyword search
- `semantic_search_activities` — vector search via local ONNX embeddings
- `get_activity` — fetch the full body of one activity
- `list_summaries` / `get_summary` — read pre-computed periodic summaries
- `list_identities` / `resolve_person` — look up people
The model can call multiple tools per question and chain them. You can
see exactly what it called with `/trace` after an answer.
See How it works for the full architecture.
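The loop itself can be pictured as a short sketch. This is illustrative only, not DevRecall's actual implementation: the `llm.step` interface and message shapes are assumptions, but the tool names match the catalogue above.

```python
# Minimal sketch of a tool-calling agent loop (illustrative; the llm.step
# protocol and message format here are assumptions, not DevRecall's API).
# The model repeatedly either calls a read-only tool or returns a final
# answer; every tool call is recorded so /trace can show it afterwards.

def run_agent(question, llm, tools):
    """tools: dict of tool name -> callable.
    llm.step(messages) returns ("tool", name, args) or ("answer", text)."""
    messages = [{"role": "user", "content": question}]
    trace = []  # the call log that /trace would display
    while True:
        kind, *rest = llm.step(messages)
        if kind == "answer":
            return rest[0], trace
        name, args = rest
        result = tools[name](**args)  # e.g. search_activities(query="auth")
        trace.append((name, args))
        messages.append({"role": "tool", "name": name, "content": result})
```

A question like “what was that auth bug?” might trigger one `search_activities` call followed by the final answer, which is exactly the chain `/trace` reveals.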
## Chat commands

| Command | Description |
|---|---|
| `/help` | Show available commands |
| `/quit` | Exit (also `/exit`) |
| `/clear` | Clear conversation history |
| `/search <query>` | Raw FTS5 keyword search (no LLM, just hits) |
| `/trace` | Show the tool calls the agent made for the last answer |
| `/stats` | Memory stats — activity count, date range |
| `/sync` | Force re-sync of every wired source |

## Conversation memory
Chat remembers within a session — follow-ups work:
```
> what did I work on last week?
[answer]

> tell me more about the payment thing
[knows what "payment thing" refers to]
```

History is ephemeral. It’s not written to disk and is dropped on
`/quit` or `/clear`.
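The ephemeral behaviour amounts to nothing more than an in-memory list. A sketch under that assumption (the class and method names here are hypothetical, not DevRecall's code):

```python
# Sketch of ephemeral session memory: history lives only in RAM,
# so exiting the process discards it and /clear empties it.
# Class and method names are illustrative assumptions.

class ChatSession:
    def __init__(self):
        self.history = []  # never written to disk

    def record(self, question, answer):
        self.history.append({"q": question, "a": answer})

    def clear(self):  # what /clear does
        self.history = []
```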
## Without an LLM

Chat itself requires an LLM. But `devrecall search "auth token"` works
without one — pure FTS5 keyword search over your activities, no
generation.
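To make “pure FTS5, no generation” concrete, here is what such a keyword search looks like at the SQLite level. The `activities` schema below is an illustrative assumption, not DevRecall's actual table layout:

```python
import sqlite3

# Sketch of an FTS5 keyword search (hypothetical schema; DevRecall's
# real table names and columns may differ).
con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE activities USING fts5(title, body)")
con.execute(
    "INSERT INTO activities VALUES (?, ?)",
    ("Fix auth token refresh", "Expired auth tokens were never refreshed."),
)
con.execute(
    "INSERT INTO activities VALUES (?, ?)",
    ("Payment rewrite kickoff", "Notes from the planning meeting."),
)

# MATCH runs a ranked keyword query — no LLM involved, just index hits.
rows = con.execute(
    "SELECT title FROM activities WHERE activities MATCH ? ORDER BY rank",
    ("auth token",),
).fetchall()
# rows -> [('Fix auth token refresh',)]
```

Only the row containing both keywords matches; this is the same class of query that backs `/search` inside chat.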
## Privacy in chat

When you use BYOK (OpenAI / Anthropic), the retrieved context is sent directly from your machine to the LLM provider — not through any DevRecall server. With local Ollama, nothing leaves your machine at all.
DevRecall’s relay is never in the path for chat or summarization.