
Fredy Acuna / May 4, 2026 / 12 min read
Engram is a persistent memory system for AI agents that connect over MCP (Claude Code, Cursor, Codex, etc.). Locally it stores everything in SQLite, and optionally you can replicate your memory to a cloud server to access it from any machine or share it across projects.
In this guide we'll deploy Engram Cloud on your own VPS using Dokploy, with real authentication (no insecure mode), automatic HTTPS, and a full web dashboard. This guide is born out of doing it in production and solving EVERY error that came up along the way.
Before you start, make sure you have:
- A VPS with Dokploy installed
- A domain or subdomain for the server (e.g. engram.yourdomain.com) whose DNS you control
- A GitHub account (we'll push the compose file to a private repo)

Engram is an agent-agnostic Go binary. It runs in several modes:

- Local: everything is stored in ~/.engram/ (SQLite)
- MCP server: exposes the memory tools (mem_save, mem_search, etc.) to your AI agent
- Cloud: the replicated server we're deploying in this guide

Key principle: the local SQLite is always the source of truth. Cloud acts as a replicated index, NOT primary storage. If your cloud goes down, you keep working locally without losing anything.
Each project in Engram is a fully isolated namespace. If you work on 10 projects, each has its own memory, observations, and sessions. They share nothing.
The project name resolves from the MCP server's cwd, not from what the LLM passes. If you open Claude Code in ~/projects/foo, that's project foo. Open it in ~/projects/bar, it's bar. Separate memory.
Instead of pasting docker-compose.yml directly into Dokploy, we'll create a private GitHub repo with the configuration. Dokploy will clone it and deploy from there.
Create a new directory:
```shell
mkdir engram-deploy && cd engram-deploy
```
Create the docker-compose.yml:
```yaml
services:
  postgres:
    image: postgres:16-alpine
    container_name: engram-cloud-postgres
    restart: unless-stopped
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 3s
      retries: 10
    volumes:
      - engram-cloud-pg:/var/lib/postgresql/data

  cloud:
    image: ghcr.io/gentleman-programming/engram:${ENGRAM_VERSION}
    container_name: engram-cloud
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      ENGRAM_DATABASE_URL: postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}?sslmode=disable
      ENGRAM_JWT_SECRET: ${ENGRAM_JWT_SECRET}
      ENGRAM_CLOUD_TOKEN: ${ENGRAM_CLOUD_TOKEN}
      ENGRAM_CLOUD_INSECURE_NO_AUTH: '0'
      ENGRAM_CLOUD_ALLOWED_PROJECTS: ${ENGRAM_CLOUD_ALLOWED_PROJECTS}
      ENGRAM_CLOUD_HOST: 0.0.0.0
      ENGRAM_PORT: '18080'
    expose:
      - '18080'
    command: ['cloud', 'serve']

volumes:
  engram-cloud-pg:
```
And a .gitignore so you don't commit secrets:
```
.env
.env.local
*.local
```
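Optionally, also commit a .env.example with placeholder values so the repo documents which variables the compose expects. The variable names come from the compose above; the values here are placeholders only, never real secrets:

```shell
# .env.example — placeholder values only; real secrets live in Dokploy
ENGRAM_VERSION=v1.15.7
POSTGRES_USER=engram
POSTGRES_DB=engram_cloud
POSTGRES_PASSWORD=changeme
ENGRAM_JWT_SECRET=changeme
ENGRAM_CLOUD_TOKEN=changeme
ENGRAM_CLOUD_ALLOWED_PROJECTS=personal
```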
Compose decisions explained:
- Pre-published official image: `ghcr.io/gentleman-programming/engram` already ships with `cloud serve` as the default CMD. No building: Dokploy does a `docker pull` and starts in seconds.
- Version pinned via `ENGRAM_VERSION`: never use `latest` in production. When you bump it manually, you know exactly what's changing.
- Postgres NOT exposed to the host: we removed the `ports:` mapping from the upstream compose. The DB is only reachable from the compose's internal network. Safer.
- `expose: 18080` (no host port): Traefik (bundled with Dokploy) grabs the service from the internal network and adds HTTPS. We don't expose ports directly on the public VPS.
- `ENGRAM_CLOUD_INSECURE_NO_AUTH: '0'`: the upstream compose ships `'1'` for local dev. Make sure it's `'0'` in production.
Push to a private GitHub repo:
```shell
git init -b main
git add -A
git commit -m "feat: initial engram cloud deploy compose"
gh repo create <your-username>/engram-deploy --private --source=. --push
```
Engram Cloud needs three distinct secrets. Each plays a different role:
| Variable | Role | Who sees it |
|---|---|---|
| `POSTGRES_PASSWORD` | Postgres user password | Server only |
| `ENGRAM_JWT_SECRET` | HMAC key for signing internal tokens | Server only |
| `ENGRAM_CLOUD_TOKEN` | Shared bearer token (the client sends it in headers) | Server AND client |
Important: each one needs a different value. Reusing the same secret for two different roles is a terrible practice.
```shell
# POSTGRES_PASSWORD — MUST be URL-safe (it goes inside a postgres:// URL)
openssl rand -hex 32

# ENGRAM_JWT_SECRET — only lives in env vars, base64 is fine
openssl rand -base64 48

# ENGRAM_CLOUD_TOKEN — only travels in headers, base64 is fine
openssl rand -base64 48
```
Why hex for Postgres? The password gets injected into `ENGRAM_DATABASE_URL`. If it contains URL-reserved characters (`:`, `/`, `@`, `?`, `#`, `+`), it breaks the parser and you'll see cryptic `invalid port` errors that have NOTHING to do with the actual port. Hex (`[0-9a-f]`) is fully URL-safe and avoids the trap entirely.
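You can sanity-check the hex claim directly. This quick shell sketch generates the password and verifies its alphabet; nothing here is Engram-specific, it's plain openssl and POSIX pattern matching:

```shell
# Generate the Postgres password and verify it is URL-safe:
# `openssl rand -hex 32` emits 64 characters drawn from [0-9a-f],
# none of which are reserved inside a postgres:// URL.
PW=$(openssl rand -hex 32)

case "$PW" in
  *[!0-9a-f]*) echo "contains unsafe characters" ;;
  *)           echo "URL-safe (${#PW} chars)" ;;
esac
```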
In Dokploy, create a new Docker Compose service (e.g. named engram-cloud) and point it at your private repo: branch `main`, compose file `docker-compose.yml`. In the Environment tab of your service, paste this and replace the `<...>` placeholders with the values you just generated:
```shell
ENGRAM_VERSION=v1.15.7
POSTGRES_USER=engram
POSTGRES_DB=engram_cloud
POSTGRES_PASSWORD=<output of openssl rand -hex 32>
ENGRAM_JWT_SECRET=<output of openssl rand -base64 48>
ENGRAM_CLOUD_TOKEN=<output of openssl rand -base64 48>
ENGRAM_CLOUD_ALLOWED_PROJECTS=personal
```
About ENGRAM_CLOUD_ALLOWED_PROJECTS: this is a server-side whitelist of which projects the cloud can accept. Start with `personal` or the name of the project where you'll use Engram first. To add more later, append them comma-separated and redeploy:

```shell
ENGRAM_CLOUD_ALLOWED_PROJECTS=personal,blog,work,experiments
```
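To see why the comma-separated format is unambiguous, here's an illustrative membership check in plain shell. This is NOT Engram's actual implementation, just the shape of the check such a whitelist implies:

```shell
# Server-side whitelist, as set in Dokploy:
ENGRAM_CLOUD_ALLOWED_PROJECTS=personal,blog,work,experiments

# Hypothetical membership check (not Engram's real code): wrapping both
# the list and the candidate in commas prevents partial matches,
# so "blog" cannot accidentally match a project named "blog-drafts".
is_allowed() {
  case ",$ENGRAM_CLOUD_ALLOWED_PROJECTS," in
    *",$1,"*) echo "allowed" ;;
    *)        echo "rejected" ;;
  esac
}

is_allowed blog        # allowed
is_allowed marketing   # rejected
```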
Important: redeploying in Dokploy with a published image is NOT a build plus long downtime. It's just a `docker pull` (already cached) and a container restart with the new env vars: 2 to 5 seconds of downtime, during which your client keeps working against the local SQLite without losing anything. Sync resumes automatically when the server comes back.
In the Domains tab of your service:
- Service: cloud
- Host: engram.yourdomain.com
- Container port: 18080
- Path: /

Before deploying, make sure the DNS for engram.yourdomain.com already points to your VPS IP. If Let's Encrypt can't validate the domain, the deploy will succeed but without TLS.
Click Deploy. Within a minute you should see:
- engram-cloud-postgres: Up (healthy)
- engram-cloud: Up

Check the engram-cloud logs. If it starts cleanly you'll see something like:
```
cloud serve listening on 0.0.0.0:18080
```
Open https://engram.yourdomain.com/dashboard/login in your browser. Paste your ENGRAM_CLOUD_TOKEN. Login. Done — you're in the dashboard.
Now let's point your local Engram client at the server.
```shell
engram version
```
You need at least v1.15.x (older versions don't have cloud commands). If it says engram dev or engram vdev, that's a development build without cloud features. Reinstall with a specific tag:
```shell
go install github.com/Gentleman-Programming/engram/cmd/engram@v1.15.7
```
The key detail: the `@v1.15.7` tag is MANDATORY. Without a tag, Go builds from main as a versionless dev build. With the tag, you get the binary with the proper version.
Verify the version is right now:
```shell
engram version
# should say: engram 1.15.7

engram --help | grep cloud
# should list the 'cloud' subcommand
```
Add this to your ~/.bashrc or ~/.zshrc (whichever you actually use; check with `echo $SHELL`):

```shell
export ENGRAM_CLOUD_TOKEN='<the same token you set in Dokploy>'
```
Reload your shell (source ~/.bashrc or open a new terminal).
```shell
engram cloud config --server https://engram.yourdomain.com
engram cloud status
```
You should see:
```
Cloud status: configured (target=cloud)
Server: https://engram.yourdomain.com
Auth status: ready (token provided via runtime cloud config)
Sync readiness: ready for explicit --project sync (project must be enrolled)
```
If it says Auth status: token not configured, your shell isn't reading the env var. Check `echo "${ENGRAM_CLOUD_TOKEN:0:6}..."` (it should print the first 6 chars).
```shell
cd ~/path/to/the-project-you-want-to-sync
engram cloud enroll personal            # only the first time
engram sync --cloud --project personal
```
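Sync is an explicit command, so it only runs when you run it. If you want it periodic, a cron entry along these lines works. The schedule is hypothetical; the command is the same one documented above, and the token must be visible to cron:

```shell
# Hypothetical crontab entry: push the 'personal' project every 30 minutes.
# cron does NOT read your .bashrc/.zshrc, so export ENGRAM_CLOUD_TOKEN
# in the crontab itself or in a wrapper script.
ENGRAM_CLOUD_TOKEN=<the same token you set in Dokploy>
*/30 * * * * cd ~/path/to/the-project-you-want-to-sync && engram sync --cloud --project personal
```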
Reload the dashboard at https://engram.yourdomain.com/dashboard. The 0 / 0 / 0 now show real numbers: the project, you as contributor, and the total chunks synced.
Engram Cloud ships a complete dashboard. Without the admin token, you'll see:
| Path | Purpose |
|---|---|
| /dashboard | Landing |
| /dashboard/stats | General metrics |
| /dashboard/activity | Recent activity |
| /dashboard/projects | Your project list |
| /dashboard/projects/{name} | Project detail: observations, sessions, prompts |
| /dashboard/browser/observations | Browse ALL your observations |
| /dashboard/browser/sessions | Browse sessions |
| /dashboard/browser/prompts | Prompt history |
| /dashboard/contributors | Who contributed what (useful for teams) |
For admin features (pause/resume sync per project, audit log), generate another token with openssl rand -base64 48 and add ENGRAM_CLOUD_ADMIN=<token> as a Dokploy env var. Then access /dashboard/admin/*.
These are the errors you'll hit in roughly this order. I had them all.
`cloud auth token is required: set ENGRAM_CLOUD_TOKEN`

Cause: missing ENGRAM_CLOUD_TOKEN in the server env vars. The upstream compose ships ENGRAM_CLOUD_INSECURE_NO_AUTH=1 (local dev mode without auth). When moving to production, you need to add the token.
Fix: add ENGRAM_CLOUD_TOKEN to Dokploy Environment and redeploy.
`cannot parse ... invalid port ":XXXXX" after host`

Cause: your POSTGRES_PASSWORD contains URL-reserved characters (`:`, `/`, `+`, `@`). The Postgres URL parser gets confused and thinks part of the password is a port.
Fix: regenerate the password with openssl rand -hex 32 (alphabet [0-9a-f], fully URL-safe).
`password authentication failed for user "engram"` (after changing the password)

Cause: you changed POSTGRES_PASSWORD in Dokploy, but the Postgres volume was already initialized with the old password. The official postgres image only applies POSTGRES_PASSWORD the FIRST time it creates the data dir. Changing it later does NOT update the existing user.
Fix: delete the postgres volume and redeploy. Since the cloud never managed to store any of your data, you lose nothing.
`volume is in use` when running `docker volume rm`

Cause: you hit "Stop" in Dokploy, but "Stop" only halts containers; it doesn't remove them. Stopped containers still hold references to their volumes.
Fix:
```shell
# Find the stopped container
docker ps -a | grep engram

# Remove it (not just stop it)
docker rm <container-id>

# Now you can remove the volume
docker volume rm <real-volume-name>
```
Note: the real volume name has a Dokploy prefix. Run `docker volume ls | grep engram` to find the exact name, something like `<projectid>_engram-cloud-pg`.
If it's easier: delete the entire app in Dokploy and recreate it. Wipes everything in one shot.
`engram dev` (client without cloud commands)

Cause: you installed the binary with `go install ...@latest` or with no tag at all. That builds from main as a development build, which doesn't include the cloud commands if your Go cache picked a commit from before the feature landed.
Fix:
```shell
go install github.com/Gentleman-Programming/engram/cmd/engram@v1.15.7
goenv rehash    # only if you use goenv
engram version  # must say 1.15.7, NOT dev
```
If you set ENGRAM_CLOUD_TOKEN in ~/.zshrc but engram cloud status keeps saying token not configured, check your shell:
```shell
echo $SHELL
```
If it says /bin/bash, .zshrc is never loaded. You need to put the export in ~/.bashrc instead.
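A tiny helper makes the mapping explicit. This is just a convenience sketch, not part of Engram, and the `.profile` fallback is a simplification (fish, for instance, uses its own config file):

```shell
# Map a login shell path to the rc file that shell actually reads.
shell_rc() {
  case "$1" in
    */zsh)  echo ".zshrc" ;;
    */bash) echo ".bashrc" ;;
    *)      echo ".profile" ;;   # simplification for other shells
  esac
}

shell_rc /bin/bash   # .bashrc
shell_rc /bin/zsh    # .zshrc
```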
For serious deployments:
- Rotate tokens: if ENGRAM_CLOUD_TOKEN has leaked, regenerate both (server and client) and redeploy. Anyone with that token has read/write access to ALL whitelisted projects.
- Back up the engram-cloud-pg volume (Dokploy integrates with S3-compatible storage).
- Treat ENGRAM_CLOUD_ALLOWED_PROJECTS as defense in depth: even if the token leaks, only the projects in the whitelist can be synced. Keep the list tight and specific.
- The .gitignore with .env is mandatory. Secrets live only in Dokploy → Environment.

You now have your own persistent memory infrastructure for AI agents running on your VPS: real token auth, automatic HTTPS, a full web dashboard, and per-project sync.
And the most important part: your memory belongs to you. It doesn't depend on an external cloud service. If Engram disappeared tomorrow, you'd still have everything locally in SQLite plus a postgres replica on your VPS.