
How to give your AI agent permanent memory

A technical overview of how OpenClaw implements persistent memory for personal AI agents: conversation history, semantic search, preference storage, and practical tips for building an agent that genuinely knows you.

K-Claw Team · December 20, 2025 · 4 min read

Why memory changes everything

The difference between a useful AI assistant and an exceptional one is memory. Without memory, every conversation starts from zero — you re-explain your context, your preferences, your projects. With memory, your agent accumulates understanding over weeks and months, becoming progressively more useful without any additional effort from you.

ChatGPT offers limited memory features managed by OpenAI. With a self-hosted OpenClaw agent, you control the entire memory system: what gets stored, how it's searched, how long it's retained, and who can access it. Your memory database stays on your server.

The three layers of agent memory

OpenClaw implements memory as three distinct systems, each serving a different purpose:

1. Conversation history (short-term + long-term)

Every message you exchange with your agent is stored in a PostgreSQL database on your server. When you start a new conversation, the agent retrieves recent messages from the same session as context. For longer gaps (days or weeks), it retrieves a summary of previous relevant conversations.
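To make the retrieval step concrete, here's a minimal sketch of logging messages and pulling recent session context. OpenClaw stores history in PostgreSQL; sqlite3 stands in below only so the snippet runs standalone, and the table and column names are assumptions for illustration, not OpenClaw's actual schema.

```python
import sqlite3
from datetime import datetime, timezone

# In-memory stand-in for the agent's PostgreSQL message store.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE messages (
    session_id TEXT, role TEXT, content TEXT, created_at TEXT)""")

def log_message(session_id, role, content):
    db.execute("INSERT INTO messages VALUES (?, ?, ?, ?)",
               (session_id, role, content,
                datetime.now(timezone.utc).isoformat()))

def recent_context(session_id, limit=20):
    """Fetch the most recent messages from a session, oldest first."""
    # Order by insertion order (rowid) so ties in timestamp are deterministic.
    rows = db.execute(
        "SELECT role, content FROM messages WHERE session_id = ? "
        "ORDER BY rowid DESC LIMIT ?", (session_id, limit)).fetchall()
    return list(reversed(rows))

log_message("s1", "user", "Let's plan the pricing page.")
log_message("s1", "assistant", "Sure - what tiers are you considering?")
print(recent_context("s1"))
```

For gaps of days or weeks, the same query would run against older sessions and the results would be summarized before being injected as context.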

This is analogous to how you might brief a human assistant: "When we last talked about the project, we decided X." Your agent has access to the actual conversation record, not a summary it made up.

2. Semantic memory (vector search)

As conversations accumulate, searching through raw text becomes impractical. OpenClaw optionally uses vector embeddings — mathematical representations of meaning — to enable semantic search across your entire conversation history.

This means you can ask: "What did I say about my pricing strategy?" and the agent finds relevant mentions even if you never used those exact words. The embeddings are generated locally (using a small embedding model) and stored in your database — no external services required.
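The retrieval mechanics look roughly like this. In OpenClaw the vectors come from a local embedding model; the tiny three-dimensional vectors below are hand-picked toy values so the sketch is self-contained, and the snippet texts are invented examples.

```python
import math

def cosine(a, b):
    """Cosine similarity: how closely two embedding vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Stored (snippet, embedding) pairs - toy vectors, not real model output.
memory = [
    ("We should charge $29/month for the starter tier", [0.9, 0.1, 0.0]),
    ("The deploy script failed on Tuesday",             [0.0, 0.2, 0.9]),
    ("Annual billing should get a 20% discount",        [0.8, 0.3, 0.1]),
]

def search(query_vec, k=2):
    """Return the k stored snippets most similar to the query vector."""
    ranked = sorted(memory, key=lambda m: cosine(query_vec, m[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# A query like "what did I say about pricing?" would embed near [1, 0, 0],
# so both pricing-related snippets rank above the unrelated deploy note:
print(search([1.0, 0.1, 0.0]))
```

Because similarity is computed on meaning vectors rather than keywords, "pricing strategy" matches "charge $29/month" even though the words don't overlap.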

3. Explicit memory store (preferences and facts)

The third layer is the most actionable: explicit facts you tell the agent to remember. When you say "remember: I prefer code examples in TypeScript, not JavaScript," the agent stores this as a discrete memory entry that's prepended to relevant future conversations.

You can view and manage these stored facts via the /memory command, edit them if they change, and delete outdated entries. This is the memory layer you actively curate — the others accumulate passively.
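Conceptually, the explicit store is a small key-value table whose entries get prepended to the prompt. The function names and prompt format below are assumptions for illustration — the real interface is the /memory command — but the shape of the idea is this:

```python
# A minimal sketch of an explicit memory store. Function names and the
# prompt format are illustrative assumptions, not OpenClaw's actual API.
facts: dict[str, str] = {}

def remember(key, fact):
    facts[key] = fact

def forget(key):
    facts.pop(key, None)

def build_system_prompt(base="You are a helpful assistant."):
    """Prepend stored facts to the system prompt before each conversation."""
    if not facts:
        return base
    lines = "\n".join(f"- {f}" for f in facts.values())
    return f"{base}\n\nKnown facts about the user:\n{lines}"

remember("code-style", "Prefers code examples in TypeScript, not JavaScript")
print(build_system_prompt())
```

Keying each fact lets you overwrite it when your preferences change instead of accumulating contradictory entries.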

Teaching your agent what to remember

The explicit memory store pays off most when you spend a few minutes up front establishing foundational context. Good things to capture:

  • Professional context: Your role, industry, tech stack, team size
  • Communication preferences: Response length, tone, formatting preferences
  • Standing rules: "Always suggest alternatives when you disagree with my approach"
  • Key relationships: Names and context for people you'll mention frequently
  • Active projects: Brief descriptions of what you're working on

As your situation evolves, update the memory: "The client project I mentioned is now complete. Remove that from your active projects memory."

Conversation context window management

Large language models have a limited context window — the amount of text they can "see" at once. Long conversation histories can exceed this limit. OpenClaw handles this through automatic summarization: when a conversation exceeds a configurable threshold, the older portion is compressed into a summary that takes fewer tokens while preserving the key facts.

This process is transparent: you don't notice it happening, but your agent continues to reference information from early in long conversations without running out of context space.
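The compression logic can be sketched in a few lines. The 4-characters-per-token estimate is a rough heuristic (not a real tokenizer), and summarize() is a stub standing in for an LLM call — both are assumptions, not OpenClaw's actual implementation.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    return max(1, len(text) // 4)

def summarize(messages):
    # Stand-in: a real agent would ask the model for a faithful summary.
    return "Summary of %d earlier messages." % len(messages)

def compact_history(messages, max_tokens=100, keep_recent=2):
    """If the history exceeds max_tokens, fold older turns into a summary
    while keeping the most recent messages verbatim."""
    total = sum(estimate_tokens(m) for m in messages)
    if total <= max_tokens or len(messages) <= keep_recent:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    return [summarize(older)] + recent

history = ["long message " * 20] * 5 + ["latest question?"]
compacted = compact_history(history)
print(len(compacted))  # the five older turns collapse into one summary entry
```

The key trade-off is choosing the threshold: compress too early and the agent loses nuance; too late and you pay for tokens the model barely uses.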

Memory backup and export

Since your memory database is a standard PostgreSQL instance on your VPS, backing it up is straightforward:

pg_dump openclaw_db > openclaw-backup-$(date +%Y%m%d).sql

Add this to a daily cron job and sync the output to cloud storage (rclone to S3, Backblaze B2, etc.) for off-site backup. If you ever migrate to a new VPS, restore with:

psql openclaw_db < openclaw-backup-20260101.sql

Your entire conversation history and memory follow your agent wherever it runs.

Privacy implications of agent memory

Permanent memory is a double-edged feature. The agent becomes more useful because it knows more about you — but it also means sensitive information persists longer. A few practical guidelines:

  • Run regular reviews of your explicit memory store and remove anything that no longer needs to be there
  • For highly sensitive conversations (medical, legal, financial), consider starting them with "Don't store this conversation in long-term memory" — OpenClaw supports conversation-level memory flags
  • Keep your VPS firewall properly configured — the database should never be accessible from the internet, only from localhost

The k-claw courses include a dedicated module on memory management, covering both the technical configuration and the practical habits that make long-term agent memory genuinely useful rather than a liability.

Ready to follow along? Install OpenClaw now.

k-claw's guided courses walk you through every step. The automated installer does the heavy lifting.

Get started