
Chat

devrecall chat is a REPL over your work memory. Ask questions like “what was that auth bug I fixed in February?” and get an answer grounded in your actual commits, threads, and meetings.

devrecall chat
DevRecall Chat: ask anything about your work history.
Type /help for commands, /quit to exit.
> what did I work on last week?

Chat needs an LLM configured. See Configure.

Pattern                  Example
Time-window summaries    "what did I do in Q1?"
Specific recall          "what was that auth bug I fixed in February?"
Person-scoped            "what did I discuss with Sarah last week?"
Decision recall          "what did we decide about caching?"
Project-scoped           "summarize my work on the payment rewrite"
Metrics                  "how many PRs did I review last quarter?"

Chat is an agent loop. The LLM is given a small catalogue of read-only tools over your local SQLite database and decides which to call:

  • current_time — anchor relative dates (“yesterday”, “Q1”)
  • list_activities / count_activities — filter by date, source, type, identity
  • search_activities — FTS5 keyword search
  • semantic_search_activities — vector search via local ONNX embeddings
  • get_activity — fetch the full body of one activity
  • list_summaries / get_summary — read pre-computed periodic summaries
  • list_identities / resolve_person — look up people

The model can call multiple tools per question and chain them. You can see exactly what it called with /trace after an answer.
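The loop itself is simple: the model either requests a tool call or emits a final answer. A minimal sketch, with a stubbed model standing in for the real LLM — the tool names match the catalogue above, but the implementations, schema, and stub logic are hypothetical:

```python
# Hypothetical read-only tool implementations over the local database.
def current_time():
    return "2024-06-01T12:00:00Z"

def search_activities(query):
    # In DevRecall this would be an FTS5 query; stubbed here.
    return [{"id": 7, "title": f"commit mentioning {query}"}]

TOOLS = {"current_time": current_time, "search_activities": search_activities}

def stub_model(question, trace):
    # A real LLM decides which tool to call next based on the question
    # and prior results; this stub always searches once, then answers.
    if not trace:
        return {"tool": "search_activities", "args": {"query": "auth"}}
    return {"answer": f"Found {len(trace[-1][1])} matching activity."}

def agent_loop(question, model=stub_model):
    trace = []  # the record that /trace would show afterwards
    while True:
        step = model(question, trace)
        if "answer" in step:
            return step["answer"], trace
        result = TOOLS[step["tool"]](**step["args"])
        trace.append((step["tool"], result))

answer, trace = agent_loop("what was that auth bug?")
```

Each tool call and its result are appended to the trace, which is why /trace can replay exactly what the agent did for the last answer.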

See How it works for the full architecture.

/help Show available commands
/quit Exit (also /exit)
/clear Clear conversation history
/search <query> Raw FTS5 keyword search (no LLM, just hits)
/trace Show the tool calls the agent made for the last answer
/stats Memory stats — activity count, date range
/sync Force re-sync of every wired source

Chat remembers within a session — follow-ups work:

> what did I work on last week?
[answer]
> tell me more about the payment thing
[knows what "payment thing" refers to]

History is ephemeral. It’s not written to disk and is dropped on /quit or /clear.
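Conceptually, the session history is just an in-memory list of turns that is sent back with each follow-up and discarded on exit — a toy sketch (the structure is an assumption, not DevRecall's actual internals):

```python
# Ephemeral session history: an in-memory list, never written to disk.
history = []

def record_turn(question, answer):
    # Each exchange is appended so follow-ups carry prior context,
    # letting the model resolve references like "the payment thing".
    history.append({"role": "user", "content": question})
    history.append({"role": "assistant", "content": answer})

def clear():
    # What /clear (or /quit) amounts to: drop the turns entirely.
    history.clear()

record_turn("what did I work on last week?", "[answer]")
```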

Chat itself requires an LLM. But devrecall search "auth token" works without one — pure FTS5 keyword search over your activities, no generation.
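That LLM-free path is plain SQLite FTS5 matching. A self-contained sketch of the idea, using a toy table (the schema is an assumption, not DevRecall's real one):

```python
import sqlite3

# Toy activities table with an FTS5 index, mimicking what a
# keyword search like `devrecall search "auth token"` queries.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE activities USING fts5(title, body)")
db.executemany(
    "INSERT INTO activities VALUES (?, ?)",
    [
        ("fix auth token refresh", "expired JWTs were not rotated"),
        ("payment rewrite kickoff", "meeting notes"),
    ],
)
# MATCH with two terms requires both; rank orders by relevance.
hits = db.execute(
    "SELECT title FROM activities WHERE activities MATCH ? ORDER BY rank",
    ("auth token",),
).fetchall()
# hits -> [('fix auth token refresh',)]
```

No generation is involved: the query returns raw hits, which is exactly what /search gives you inside chat as well.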

When you use BYOK (OpenAI / Anthropic), the retrieved context is sent directly from your machine to the LLM provider — not through any DevRecall server. With local Ollama, nothing leaves your machine at all.

DevRecall’s relay is never in the path for chat or summarization.