Building mindfry: A Cognitive Memory Layer for AI Agents
Memories that decay, associate, and dream — just like human cognition

TL;DR
I built mindfry — a cognitive memory layer for AI agents inspired by how human consciousness works. Memories decay over time, automatically associate with each other, and transition between conscious/subconscious states. Built for LLM agents, game AI, and any system that needs memory that thinks.
Why AI Agents Need Better Memory
Most AI agent memory is just a list:
```typescript
const memory = []
memory.push({ role: 'user', content: '...' })
memory.push({ role: 'assistant', content: '...' })
// Forever growing, never forgetting
```
This creates problems:
Context overflow: LLMs have token limits
No prioritization: Old, irrelevant memories carry the same weight as recent, crucial ones
No association: Related memories don't activate each other
Manual management: You decide what to forget, when
But that's not how memory works.
Human memory is dynamic:
Memories fade over time
Frequently accessed memories stay vivid
Related memories prime each other
There's a natural threshold between conscious recall and subconscious storage
I built mindfry to give AI agents this kind of memory.
The Consciousness Model
mindfry models memory as a graph with energy dynamics:
Every memory has:
Energy: How "active" it is (0.0 to 1.0)
Threshold: The line between conscious and subconscious
Decay Rate: How fast energy fades over time
Bonds: Weighted connections to other memories
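The model above can be sketched in a few lines. The field names below are illustrative, not mindfry's actual internals:

```typescript
// Sketch of a memory node under the consciousness model described above.
interface MemoryNode {
  energy: number              // current activation, 0.0 to 1.0
  threshold: number           // conscious if current energy >= threshold
  decayRate: number           // exponential decay constant per time unit
  bonds: Map<string, number>  // id -> bond weight to other memories
}

// Exponential decay: energy fades continuously with elapsed time.
function decayedEnergy(node: MemoryNode, elapsed: number): number {
  return node.energy * Math.exp(-node.decayRate * elapsed)
}

// The conscious/subconscious line is just a threshold test.
function isConscious(node: MemoryNode, elapsed: number): boolean {
  return decayedEnergy(node, elapsed) >= node.threshold
}
```

A node isn't deleted when it goes subconscious; it just stops showing up in conscious recall until something re-energizes it.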
Use Case: LLM Agent Memory
Imagine an AI assistant that remembers conversations:
```typescript
import { createPsyche } from 'mindfry'

const agentMemory = createPsyche<{ text: string; importance: number }>({
  defaultThreshold: 0.3,
  defaultDecayRate: 0.0001, // ~2 hour half-life
  autoAssociate: true
})

// User mentions they're a vegetarian
agentMemory.remember('user-diet', {
  text: 'User is vegetarian',
  importance: 0.9
}, 1.0)

// Later, user asks for restaurant recommendations
agentMemory.stimulate('user-diet', 0.3) // Boost relevant memory

// Get conscious memories for context
const context = agentMemory.getConscious()
  .map(m => m.content.text)
  .join('\n')
```
mindfry doesn't decide what goes into the prompt; it decides what is worth remembering.
The agent naturally:
Remembers important facts longer (higher initial energy)
Forgets small talk faster (low energy, fast decay)
Associates related memories (priming)
Keeps context window manageable (subconscious filtered out)
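That last point, filtering the subconscious out of the context window, is a one-liner. The shapes below are illustrative, not mindfry's API:

```typescript
// Sketch: turning decayed memories into a bounded prompt context.
interface Stored { text: string; energy: number; threshold: number }

function consciousContext(memories: Stored[]): string {
  return memories
    .filter(m => m.energy >= m.threshold)  // subconscious entries drop out
    .sort((a, b) => b.energy - a.energy)   // strongest memories first
    .map(m => m.text)
    .join('\n')
}
```

Everything below threshold simply never reaches the LLM, so the context stays small without any manual pruning.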
Use Case: Game NPC Memory
NPCs that remember player actions:
```typescript
const npcMemory = createPsyche<NPCMemory>({
  defaultThreshold: 0.2,
  defaultDecayRate: 0.00001, // Slower decay for NPCs
})

// Player helped the NPC
npcMemory.remember('player-helped', {
  type: 'favor',
  description: 'Player saved me from bandits',
  emotion: 'grateful'
}, 1.0)

// Player stole from the NPC
npcMemory.remember('player-stole', {
  type: 'betrayal',
  description: 'Player took my sword',
  emotion: 'angry'
}, 0.8)

// Time passes... memories decay differently
// When player returns:
const memories = npcMemory.getConscious()
// NPC's reaction based on what they still remember
```
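With exponential decay at a shared rate, the stronger memory stays conscious longer. A back-of-the-envelope check using the numbers above (time units are whatever the clock counts in; the exact moment is illustrative):

```typescript
// Both memories decay at the same rate, so their ratio is preserved,
// but they cross the 0.2 threshold at different times.
const rate = 0.00001
const threshold = 0.2
const decay = (e0: number, t: number) => e0 * Math.exp(-rate * t)

// At t = 150,000 the betrayal (initial 0.8) has already slipped below
// the threshold, while the favor (initial 1.0) is still conscious:
const t = 150_000
const favorConscious = decay(1.0, t) >= threshold     // true
const betrayalConscious = decay(0.8, t) >= threshold  // false
```

So for a window of time the NPC remembers being saved but has forgotten the stolen sword, purely as a consequence of the initial energies.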
The Key Innovation: Lazy Decay
Traditional approaches burn CPU:
```typescript
// ❌ BAD: Clock-driven decay
setInterval(() => {
  for (const memory of allMemories) {
    memory.energy *= Math.exp(-rate * dt)
  }
}, 100) // CPU spinning even when idle
```
mindfry computes energy only when accessed:
```typescript
// ✅ GOOD: Lazy evaluation
getEnergy(index: number): number {
  const elapsed = this.clock() - this.lastAccess[index]
  // decayLUT: precomputed table of exp(-rate * t) over quantized elapsed time
  return this.baseEnergy[index] * decayLUT[elapsed][rate]
}
```
Zero idle CPU. Energy only matters when you ask for it.
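A self-contained sketch of the lazy pattern, with the class and names mine rather than mindfry's internals (I call `Math.exp` directly where mindfry reportedly uses a lookup table):

```typescript
// Lazy decay: no timers. Energy is derived from the timestamp of the
// last write whenever someone reads it.
class LazyEnergy {
  private base: number[] = []
  private lastAccess: number[] = []

  // clock is injectable so tests can control time; defaults to wall clock.
  constructor(private rate: number, private clock: () => number = () => Date.now()) {}

  set(index: number, energy: number): void {
    this.base[index] = energy
    this.lastAccess[index] = this.clock()
  }

  // The only place decay is ever computed.
  get(index: number): number {
    const elapsed = this.clock() - this.lastAccess[index]
    return this.base[index] * Math.exp(-this.rate * elapsed)
  }
}
```

Between a `set` and a `get`, the process does literally nothing for that memory, which is where the zero idle CPU comes from.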
Priming: Memories Activate Each Other
When you remember something, related memories light up:
```typescript
// Remember "coffee"
psyche.remember('coffee', { text: 'Morning coffee' })
// Auto-bonds to conscious memories like "morning", "routine"

// Stimulate "coffee"
psyche.stimulate('coffee', 0.3)
// Energy propagates to "morning", "routine" through bonds
```
This mimics how human recall works — one memory triggers associated memories.
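One-hop propagation can be sketched like this; the bond weights and spread factor are assumptions for illustration, not mindfry's actual values:

```typescript
// Sketch of priming: stimulating a node spreads a fraction of the boost
// along its weighted bonds to neighbors.
type Graph = Map<string, { energy: number; bonds: Map<string, number> }>

function stimulate(g: Graph, id: string, delta: number, spread = 0.5): void {
  const node = g.get(id)
  if (!node) return
  node.energy = Math.min(1, node.energy + delta)
  for (const [neighborId, weight] of node.bonds) {
    const neighbor = g.get(neighborId)
    // Neighbors receive delta scaled by bond weight and a spread factor.
    if (neighbor) neighbor.energy = Math.min(1, neighbor.energy + delta * weight * spread)
  }
}
```

A strongly bonded neighbor can be pushed back over its threshold by the stimulus alone, which is exactly the priming effect described above.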
The Mythological Architecture
Each layer has a mythological name:
| Layer | Name | Role |
| --- | --- | --- |
| Consciousness | Psyche 🦋 | Memory container |
| Maintenance | Morpheus 💤 | Background cleanup |
| Persistence | AkashicRecords 📜 | Cold storage |
Psyche (The Soul)
Main API. Remembers, stimulates, recalls.
```typescript
const psyche = createPsyche()

psyche.remember(id, content, energy)
psyche.stimulate(id, energyDelta)
psyche.recall(id, maxDepth) // Traverse graph
```
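`recall(id, maxDepth)` presumably walks the bond graph outward from a memory, stopping after `maxDepth` hops. A sketch under that assumption (the real traversal may differ):

```typescript
// Depth-limited breadth-first walk over the bond graph.
type Bonds = Map<string, string[]> // memory id -> bonded memory ids

function recallIds(bonds: Bonds, start: string, maxDepth: number): string[] {
  const seen = new Set<string>([start])
  let frontier = [start]
  for (let depth = 0; depth < maxDepth; depth++) {
    const next: string[] = []
    for (const id of frontier) {
      for (const linked of bonds.get(id) ?? []) {
        if (!seen.has(linked)) {
          seen.add(linked)
          next.push(linked)
        }
      }
    }
    frontier = next
  }
  return [...seen]
}
```

The depth limit is what keeps recall local: closely associated memories surface, distant ones stay submerged.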
Morpheus (God of Dreams)
Runs when the system is idle. Prunes dead bonds, transfers faded memories to archive.
```typescript
morpheus.notify('idle') // Hint: system is calm
// Morpheus decides what to clean up
```
AkashicRecords (Eternal Memory)
Cold storage for archived memories. Persists with access score decay.
```typescript
await akashic.inscribe(id, payload, energy, ...)
await akashic.retrieve(id) // Reincarnate
```
Performance
| Metric | Value |
| --- | --- |
| Memory per node | 4 bytes |
| Idle CPU | 0% |
| Bundle (ESM) | ~25 KB |
| Dependencies | 0 |
Built with Uint8Array for 25x memory reduction vs object-based storage.
Performance is achieved by deferring work until observation time — not by precomputation.
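As a sketch of the typed-array idea, here is energy quantized to one byte per node in a flat `Uint8Array`. The quoted 4 bytes per node presumably packs more fields (threshold, rate, flags); this shows energy alone:

```typescript
// One byte per node: energy 0.0..1.0 quantized to 0..255.
// A flat typed-array buffer avoids per-node object headers and pointers,
// which is where the large memory reduction over object storage comes from.
class EnergyStore {
  private bytes: Uint8Array

  constructor(capacity: number) {
    this.bytes = new Uint8Array(capacity)
  }

  set(i: number, energy: number): void {
    // Clamp to [0, 1], then quantize.
    this.bytes[i] = Math.round(Math.min(1, Math.max(0, energy)) * 255)
  }

  get(i: number): number {
    return this.bytes[i] / 255
  }
}
```

The trade-off is ~0.4% quantization error per read, which is negligible against a threshold like 0.3.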
Try It
```shell
npm install mindfry
```
```typescript
import { createPsyche } from 'mindfry'

const memory = createPsyche()
memory.remember('fact', { text: 'User likes TypeScript' }, 1.0)

// Time passes... energy decays
console.log(memory.get('fact')?.energy) // 0.67

// Stimulate to reinforce
memory.stimulate('fact', 0.3)
```
What's Next
v0.4.0: Full Morpheus → Psyche → AkashicRecords integration
v0.5.0: Perception layer (reactive observation)
v0.6.0: Semantic similarity bonds (embedding-based)*
\* experimental
The goal: a foundational cognitive memory layer for agent architectures
Links:
Give it a ⭐ if you build something interesting with it!
