Save Slack and Whatsapp Conversations into Hermes + Gbrain Longterm Memory

Posted on May 15, 2026 by Sudhir Mantena

You converse with your Hermes Agent via WhatsApp and Slack. Here’s how to save those conversations into long-term memory in Hermes + GBrain.


Resources:

This is an article in the series Hermes + Gbrain. You can find earlier posts here:

  1. Hermes + GBrain: Complete Setup Guide
  2. Hermes + Gbrain + Notion: Complete Setup Guide
  3. Slack & Whatsapp distillations in Hermes + Gbrain: Github Source Code

Hermes is session-based. Every conversation starts fresh.

GBrain gives it long-term memory, but only if you explicitly feed it. By default, your Slack and WhatsApp conversations with Hermes are stored in a local SQLite database (state.db) and never make it into GBrain. 131 sessions sitting there. Zero intelligence extracted.

I fixed this with a nightly distillation pipeline. Every night at 11pm, the day’s conversations are read from state.db, distilled by GPT-4o into decisions / action items / observations / open questions, and written to GBrain as structured knowledge pages. By morning, Hermes has full context on everything discussed the day before.

Here’s exactly how I built it.

The Problem

Hermes stores all conversations in ~/.hermes/state.db. Check yours:

sqlite3 ~/.hermes/state.db "SELECT source, COUNT(*) FROM sessions GROUP BY source;"

You’ll see something like:

slack|75
whatsapp|56
cli|59
cron|388

Those 131 Slack + WhatsApp sessions contain real decisions, action items, trip plans, business ideas — and none of it is in GBrain. Hermes forgets all of it the moment the session ends.

Gotcha: GBrain’s memory.provider in ~/.hermes/config.yaml defaults to blank. This means cross-session memory only exists if you explicitly build it. The session store is not the same as long-term memory.

The Architecture

state.db (Slack + WhatsApp sessions)
↓  collect_conversations.py  [deterministic — no LLM]
digest-YYYY-MM-DD.md
↓  distill_to_gbrain.py  [GPT-4o — judgment layer]
daily-YYYY-MM-DD.md + entity pages
↓  gbrain put <slug>
GBrain long-term memory

Two scripts. One shell wrapper. One cron job.

The pattern is identical to GBrain’s existing recipes (email-to-brain, x-to-brain): deterministic code handles data extraction, LLM handles judgment. Never mix them.

Prerequisites

  • Hermes Agent running on AWS VPS (Ubuntu)
  • GBrain installed at /home/ubuntu/.bun/bin/gbrain
  • OpenAI API key configured in GBrain (gbrain config get openai_api_key)
  • Python at /home/ubuntu/.hermes/hermes-agent/venv/bin/python
  • sqlite3 installed (sudo apt install sqlite3 -y)

Part 1: The Collector

collect_conversations.py reads today’s sessions from state.db and writes a clean markdown digest. No LLM involved — pure SQL + text formatting.

Install at:

~/.hermes/scripts/collect_conversations.py

Key design decisions:

  • Filters to source IN ('slack', 'whatsapp') only — skips cron noise
  • Strips the Hermes persona boilerplate from system_prompt (it’s 2,000 characters of SOUL.md on every session — not useful)
  • Skips sessions with no real user messages
  • Truncates individual messages at 2,000 characters to handle file uploads
  • Splits JSON content blocks (tool calls) and extracts only readable text
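
The core of the collector can be sketched in a few dozen lines. This is a minimal illustration, not the actual script: it assumes a hypothetical state.db layout (a sessions table with title, source, created_at, and a messages JSON column), so adapt the column names to your schema:

```python
import json
import sqlite3

MAX_MSG_CHARS = 2000  # truncate long messages (e.g. file uploads)

def collect(db_path, date, sources=("slack", "whatsapp")):
    """Build a markdown digest of one day's Slack/WhatsApp sessions."""
    conn = sqlite3.connect(db_path)
    placeholders = ",".join("?" for _ in sources)
    rows = conn.execute(
        f"SELECT title, source, created_at, messages FROM sessions"
        f" WHERE source IN ({placeholders}) AND date(created_at) = ?"
        " ORDER BY created_at",
        (*sources, date),
    ).fetchall()
    conn.close()

    blocks = []
    for title, source, created_at, raw in rows:
        messages = json.loads(raw)
        # Skip sessions with no real user messages
        if not any(m.get("role") == "user" for m in messages):
            continue
        lines = [
            f"## Session {len(blocks) + 1}: {title}",
            f"**Source:** {source.upper()}  |  **Started:** {created_at}",
        ]
        for m in messages:
            who = "You" if m.get("role") == "user" else "Hermes"
            text = str(m.get("content", ""))[:MAX_MSG_CHARS]
            lines.append(f"**{who}:** {text}")
        blocks.append("\n".join(lines))

    header = [
        f"# Hermes Conversation Digest — {date}",
        f"Total sessions: {len(blocks)}",
        "---",
    ]
    return "\n".join(header + blocks)
```

The deliberate part is what it leaves out: no LLM calls, no summarization — just SQL filtering and text formatting.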

Test it:

/home/ubuntu/.hermes/hermes-agent/venv/bin/python \
      ~/.hermes/scripts/collect_conversations.py \
       --date 2026-05-10 \
       --output /tmp/digest-test.md
cat /tmp/digest-test.md | head -60

Expected output

# Hermes Conversation Digest — 2026-05-10
Total sessions: 7
---
## Session 1: WhatsApp Persona Config
**Source:** SLACK  |  **Started:** 18:26
**[18:26] You:** Hermes ⇔ WhatsApp Persona Config...
**[18:26] Hermes:** Got it. Send the next bit of context when ready.

Part 2: The Distiller

distill_to_gbrain.py reads the digest and extracts structured knowledge using GPT-4o. It writes one daily page + individual entity pages to GBrain.

Install at:

~/.hermes/scripts/distill_to_gbrain.py

Model matters. I initially used gpt-4o-mini. The extractions were thin — 2 generic action items from a day with 25 sessions. Switched to gpt-4o. Night and day difference.

Chunking matters. A busy day can produce 37,000+ character digests. GPT-4o supports 128k context, but I chunk at 40,000 characters at session boundaries to keep each call focused. Results from multiple chunks are merged and deduplicated.
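
Because the collector starts every session with a `## Session` heading, chunking at session boundaries is a plain string split. A sketch (this helper is illustrative, not the post's exact code):

```python
def chunk_digest(digest: str, max_chars: int = 40_000):
    """Split a digest into chunks of at most max_chars characters,
    cutting only at '## Session' headings so no conversation is
    split mid-session."""
    sections = digest.split("\n## Session ")
    chunks, current = [], sections[0]
    for sec in sections[1:]:
        piece = "\n## Session " + sec
        # A single oversized session still becomes its own chunk
        if current and len(current) + len(piece) > max_chars:
            chunks.append(current)
            current = piece
        else:
            current += piece
    if current:
        chunks.append(current)
    return chunks
```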

The distillation prompt extracts:

  • Decisions — things resolved or committed to
  • Action items — concrete next steps with owner
  • Observations — insights, learnings, what worked/didn’t
  • Open questions — raised but unresolved
  • Entities — real people and companies with context

Gotcha: LLMs left unconstrained will create entity pages for email addresses, stock ticker symbols, tool names, and groups of people concatenated together. Add strict rules to the prompt:

NEVER create entity entries for:

- Email addresses

- Stock ticker symbols (AAPL, NVDA)

- Tools or platforms mentioned in passing (WhatsApp, Slack, pdfplumber)

- Groups of people combined into one name

- Sudhir himself — he is the user, not an entity

- Vague single names with no surname or role

Gotcha: gbrain import <file> expects a directory, not a file path. Use gbrain put <slug> with stdin instead:

result = subprocess.run([gbrain, "put", slug], input=page_content, capture_output=True, text=True, env=env)

Test with dry run:

export OPENAI_API_KEY=$(/home/ubuntu/.bun/bin/gbrain config get openai_api_key)
/home/ubuntu/.hermes/hermes-agent/venv/bin/python \
~/.hermes/scripts/distill_to_gbrain.py \
--digest /tmp/digest-test.md \
--date 2026-05-10 \
--dry-run
cat ~/.hermes/distill/pages/raw-2026-05-10.json

Expected output:

{
"date": "2026-05-10",
"decisions":
[
{
"title": "WhatsApp agent configuration confirmed",
"detail": "require_mention: true confirmed to minimise unsolicited responses."
}
],
"action_items":
[
{
"task": "Investigate false-positive safety block in daily morning brief job",
"owner": "Hermes",
"context": "Job was blocked, not a WhatsApp delivery failure."
}
],
...
}
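
Merging results from multiple chunks can be as simple as fingerprinting each item by its title or task. The field names below follow the JSON above for decisions and action items; I'm assuming observations and open questions also carry a title field, so adjust to your actual schema:

```python
def merge_extractions(chunk_results):
    """Merge per-chunk extraction dicts, deduplicating by a
    normalized title/task fingerprint."""
    dedupe_fields = {
        "decisions": "title",
        "action_items": "task",
        "observations": "title",      # assumed field name
        "open_questions": "title",    # assumed field name
    }
    merged = {k: [] for k in dedupe_fields}
    seen = {k: set() for k in dedupe_fields}
    for result in chunk_results:
        for key, field in dedupe_fields.items():
            for item in result.get(key, []):
                fingerprint = item.get(field, "").strip().lower()
                if fingerprint and fingerprint not in seen[key]:
                    seen[key].add(fingerprint)
                    merged[key].append(item)
    return merged
```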

Part 3: The Nightly Pipeline

nightly_distill.sh orchestrates the full flow: collect → distill → gbrain put.

Install it using:

~/.hermes/scripts/nightly_distill.sh

It automatically pulls the OpenAI key from GBrain config — no manual key management:

OPENAI_API_KEY=$("$GBRAIN_BIN" config get openai_api_key 2>/dev/null)
export OPENAI_API_KEY
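
The whole wrapper is short. Here is a sketch of what it can look like, written as a function for readability; the paths are the ones from this post, and the GBRAIN_BIN / PY_BIN overrides are my own illustrative additions:

```shell
# Sketch of nightly_distill.sh (paths from this post; overrides are
# illustrative, adjust to your install).
nightly_distill() {
  local day="${1:-$(date +%F)}"        # default: today
  local gbrain="${GBRAIN_BIN:-/home/ubuntu/.bun/bin/gbrain}"
  local py="${PY_BIN:-/home/ubuntu/.hermes/hermes-agent/venv/bin/python}"
  local scripts="$HOME/.hermes/scripts"
  local digest="$HOME/.hermes/distill/digest-$day.md"

  mkdir -p "$(dirname "$digest")"

  # Pull the OpenAI key from GBrain config -- no manual key management
  OPENAI_API_KEY=$("$gbrain" config get openai_api_key 2>/dev/null)
  export OPENAI_API_KEY

  # collect -> distill -> embed
  "$py" "$scripts/collect_conversations.py" --date "$day" --output "$digest" || return 1
  "$py" "$scripts/distill_to_gbrain.py" --digest "$digest" --date "$day" || return 1
  "$gbrain" embed --stale
}
```

In the real script, call `nightly_distill "$@"` at the bottom so cron can invoke it directly.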

Part 4: Backfill History

Before setting up the nightly cron, backfill everything. backfill_distill.sh loops over a date range, processes each day, and runs gbrain embed --stale once at the end.

bash ~/.hermes/scripts/backfill_distill.sh --days 21
── 2026-05-09 ──────────────────────────────
✅ Collected 9 sessions
✅ Distilled → GBrain
── 2026-05-10 ──────────────────────────────
✅ Collected 7 sessions
✅ Distilled → GBrain
════════════════════════════════════════
Backfill complete
Days processed : 22
Days skipped : 1 (no conversations)
Total sessions : 130
Failed : 0
════════════════════════════════════════
▶ Running gbrain embed --stale ...
Embedded 18 chunks across 14 pages
✅ Embeddings updated

Gotcha: If the backfill gets interrupted (Ctrl+C), just re-run. Each day is idempotent — gbrain put overwrites existing pages cleanly.
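
The backfill loop itself is only a few lines. A sketch, assuming the nightly wrapper accepts a date argument and that you're on GNU date (as on Ubuntu); the runner and gbrain paths are parameterised here for illustration:

```shell
# Sketch of backfill_distill.sh: loop a date range, process each day,
# embed once at the end (per the post).
backfill() {
  local days="${1:-21}"
  local runner="${2:-$HOME/.hermes/scripts/nightly_distill.sh}"
  local gbrain="${GBRAIN_BIN:-/home/ubuntu/.bun/bin/gbrain}"
  local i day
  for i in $(seq "$days" -1 0); do
    day=$(date -d "-$i days" +%F)     # GNU date, as on Ubuntu
    echo "── $day ──────────────"
    "$runner" "$day" || echo "failed: $day"
  done
  # One embedding pass at the end, per the post
  "$gbrain" embed --stale
}
```

Because each day is idempotent, re-running backfill after an interruption is safe: already-processed days just get overwritten with the same pages.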

Part 5: Install the Nightly Cron

crontab -e

0 23 * * * /home/ubuntu/.hermes/scripts/nightly_distill.sh >> /home/ubuntu/.hermes/distill/logs/nightly.log 2>&1

Part 6: The Remember Skill

Check it ran the next morning:

cat ~/.hermes/distill/logs/nightly.log

For things too important to wait until 11pm, add an instant-save skill. Send a message starting with Remember: from Slack or WhatsApp:

Remember: We decided to use HIFO cost basis for the CA tool
Remember: Sam's number is +1-732xxx — call him about the lease

remember.py writes the note directly to GBrain and embeds it immediately — searchable within seconds.

~/.hermes/skills/productivity/remember/scripts/remember.py

Register skill.md in the same directory and Hermes picks it up automatically via skill scanning.
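
A sketch of what remember.py can look like. The slug scheme and page format below are my own inventions, and the gbrain path is parameterised for illustration:

```python
import subprocess
from datetime import datetime, timezone

GBRAIN = "/home/ubuntu/.bun/bin/gbrain"  # path used in this post

def remember(note: str, gbrain: str = GBRAIN) -> str:
    """Save one note to GBrain as its own page and embed it right away."""
    now = datetime.now(timezone.utc)
    slug = f"remember-{now.strftime('%Y-%m-%d-%H%M%S')}"  # illustrative scheme
    page = f"# Remembered {now.strftime('%Y-%m-%d %H:%M')} UTC\n\n{note}\n"
    # `gbrain import` expects a directory, so pipe the page to `put <slug>`
    subprocess.run([gbrain, "put", slug], input=page, text=True, check=True)
    # Embed immediately so the note is searchable within seconds
    subprocess.run([gbrain, "embed", "--stale"], check=True)
    return slug
```

The skill entry point just strips the Remember: prefix from the incoming message and passes the rest to this function.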

Verify It Works

/home/ubuntu/.bun/bin/gbrain query "Toronto trip"
/home/ubuntu/.bun/bin/gbrain query "AI automation"
/home/ubuntu/.bun/bin/gbrain query "WhatsApp morning brief"

Top result for each should be the relevant daily page with score > 0.75.

Cost

| Component                                   | Monthly Cost |
| ------------------------------------------- | ------------ |
| GPT-4o distillation (~10k tokens/day)       | ~$0.75/mo    |
| OpenAI embeddings (text-embedding-3-large)  | ~$0.10/mo    |
| Total                                       | < $1/mo      |

What’s Now in GBrain

Every night:

  • All Slack + WhatsApp conversations → structured knowledge
  • Decisions, action items, observations indexed and searchable
  • Entity pages for real people and companies, updated with timeline entries
  • Remember: available for instant saves anytime

Hermes wakes up each morning knowing what you decided, planned, and discussed the day before. The brain compounds.



Thanks to @Teknium and the @NousResearch team for building Hermes, and @garrytan for open-sourcing GBrain.

That’s all folks!

If you have suggestions to improve this further, please comment; I'd love to learn. Please like and repost so other Hermes + GBrain users can benefit.

Please Follow to learn with me how to Build the World’s Best Personal AI Assistant for yourself.


Category: Artificial Intelligence
