Keep Blog

LLM-Wiki

Chatting with Obsidian, Hermes Agent, and Keep

Flows

Code Mode for Agent Memory

Benchmarking Keep with LoCoMo

76.2% on LoCoMo with local embedding/summarization models

Reflection and Memory

LLMs and memory as mirrors

Introducing Keep

Reflective Memory for AI Agents

Wisdom, or Prompt-Engineering?

When the Singularity happened, we were sitting on a park bench in Berkeley.