Building Self-Organizing Zettelkasten Knowledge Graphs

Explore cutting-edge agentic AI with a Zettelkasten memory system in this tutorial.

Overview

In this tutorial, we dive into the cutting edge of Agentic AI by building a “Zettelkasten” memory system, a “living” architecture that organizes information much like the human brain. We move beyond standard retrieval methods to construct a dynamic knowledge graph where an agent autonomously decomposes inputs into atomic facts, links them semantically, and even “sleeps” to consolidate memories into higher-order insights.

Using Google’s Gemini, we implement a robust solution that addresses real-world API constraints, ensuring our agent not only stores data but actively understands the evolving context of our projects.

Essential Libraries

We begin by importing essential libraries for graph management and AI model interaction, while also securing our API key input. Crucially, we define a robust retry_with_backoff function that automatically handles rate limit errors, ensuring our agent gracefully pauses and recovers when the API quota is exceeded during heavy processing.

!pip install -q -U google-generativeai networkx pyvis scikit-learn numpy
 
import os
import json
import uuid
import time
import numpy as np
import networkx as nx
from dataclasses import dataclass, field
from typing import List
from pyvis.network import Network
from sklearn.metrics.pairwise import cosine_similarity
import google.generativeai as genai
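
The retry_with_backoff helper mentioned above can be sketched as a simple decorator. The delay values and the broad exception handling here are illustrative assumptions; in practice you would narrow the except clause to the rate-limit error raised by the Gemini client.

```python
import time
import functools

def retry_with_backoff(max_retries=5, base_delay=2.0):
    """Retry a function with exponential backoff when it raises.

    A minimal sketch: the delays and the caught exception type are
    illustrative. In the tutorial this guards Gemini API calls against
    rate-limit errors during heavy processing.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_retries - 1:
                        raise  # out of retries: surface the error
                    time.sleep(base_delay * (2 ** attempt))
        return wrapper
    return decorator
```

Any method that hits the API can then be wrapped with @retry_with_backoff(), so a quota error pauses the agent instead of crashing it.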

MemoryNode Structure

We define the fundamental MemoryNode structure to hold our content, types, and vector embeddings in an organized data class. We then initialize the main RobustZettelkasten class, establishing the network graph and configuring the Gemini embedding model that serves as the backbone of our semantic search capabilities.

@dataclass
class MemoryNode:
    id: str
    content: str
    type: str
    embedding: List[float] = field(default_factory=list)
    timestamp: int = 0
 
class RobustZettelkasten:
    def __init__(self):
        self.graph = nx.Graph()   # nodes hold MemoryNode data; edges are semantic links
        self.step_counter = 0     # logical clock used to timestamp memories
        self.model = genai.GenerativeModel(MODEL_NAME)  # MODEL_NAME is configured earlier
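
To see how the dataclass and the graph fit together, here is a minimal, self-contained usage sketch: the node id and content are placeholder values, and the node object rides along as a data attribute on the NetworkX node.

```python
import networkx as nx
from dataclasses import dataclass, field
from typing import List

@dataclass
class MemoryNode:
    id: str
    content: str
    type: str
    embedding: List[float] = field(default_factory=list)
    timestamp: int = 0

g = nx.Graph()
node = MemoryNode(id="n1", content="Gemini supports embeddings", type="fact")
g.add_node(node.id, data=node)          # store the dataclass as node data
print(g.nodes["n1"]["data"].content)    # retrieve the original content
```

Keeping the full MemoryNode on each graph node means traversal code can read content and embeddings without a separate lookup table.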

Information Ingestion

We construct an ingestion pipeline that decomposes complex user inputs into atomic facts to prevent information loss. We immediately embed these facts and use our agent to identify and create semantic links to existing nodes, effectively building a knowledge graph in real time that mimics associative memory.

def add_memory(self, user_input):
    self.step_counter += 1
    facts = self._atomize_input(user_input)        # LLM splits input into atomic facts
    for fact in facts:
        emb = self._get_embedding(fact)
        candidates = self._find_similar_nodes(emb)
        node = MemoryNode(id=str(uuid.uuid4()), content=fact,
                          type="fact", embedding=emb, timestamp=self.step_counter)
        self.graph.add_node(node.id, data=node)
        for cand_id in candidates:                 # link to semantically similar memories
            self.graph.add_edge(node.id, cand_id)
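
The similarity search behind _find_similar_nodes can be sketched with plain cosine similarity over the stored embeddings. The 0.8 threshold and the dict-based store are illustrative assumptions, not the tutorial's exact values.

```python
import numpy as np

def find_similar(query_emb, stored, threshold=0.8):
    """Return ids of stored embeddings whose cosine similarity to
    query_emb meets the threshold. A sketch: `stored` maps node id
    to embedding vector, and the threshold value is illustrative."""
    q = np.asarray(query_emb, dtype=float)
    q = q / np.linalg.norm(q)                  # normalize once for all comparisons
    hits = []
    for node_id, emb in stored.items():
        v = np.asarray(emb, dtype=float)
        sim = float(q @ (v / np.linalg.norm(v)))
        if sim >= threshold:
            hits.append(node_id)
    return hits

stored = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 0.1]}
hits = find_similar([1.0, 0.0], stored, threshold=0.9)  # "a" and "c" match
```

For a handful of nodes a linear scan like this is fine; a production system would swap in a vector index.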

Memory Consolidation

We implement the cognitive functions of our agent, enabling it to “sleep” and consolidate dense memory clusters into higher-order insights. We also define the query logic that traverses these connected paths, allowing the agent to reason across multiple hops in the graph to answer complex questions.

def consolidate_memory(self):
    # "Sleep" phase: summarize densely connected clusters into insights.
    high_degree_nodes = [n for n, d in self.graph.degree() if d >= 2]
    for main_node in high_degree_nodes:
        cluster_ids = [main_node] + list(self.graph.neighbors(main_node))
        cluster_content = [self.graph.nodes[n]['data'].content for n in cluster_ids]
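
The multi-hop query traversal described above can be sketched with NetworkX's shortest-path lengths: starting from an entry node matched by the query, collect every node within a fixed hop radius as context. The simplified content attribute and the two-hop cutoff are illustrative assumptions.

```python
import networkx as nx

def gather_context(graph, entry_id, hops=2):
    """Collect node contents within `hops` edges of an entry node;
    a minimal sketch of multi-hop retrieval over the memory graph."""
    reachable = nx.single_source_shortest_path_length(graph, entry_id, cutoff=hops)
    return [graph.nodes[n]["content"] for n in reachable]

# Toy graph: a - b - c - d, queried from "a" with a two-hop radius.
g = nx.Graph()
g.add_node("a", content="fact A"); g.add_node("b", content="fact B")
g.add_node("c", content="fact C"); g.add_node("d", content="fact D")
g.add_edges_from([("a", "b"), ("b", "c"), ("c", "d")])
ctx = gather_context(g, "a", hops=2)   # reaches a, b, c but not d
```

The gathered contents would then be handed to the model as context, letting the agent reason across connected facts rather than a single retrieved chunk.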

Visualization

We wrap up by adding a visualization method that generates an interactive HTML graph of our agent’s memory, allowing us to inspect the nodes and edges.

def show_graph(self):
    net = Network(notebook=True)
    for n, data in self.graph.nodes(data=True):
        net.add_node(n, label=data['data'].content[:40])  # truncate long facts
    for u, v in self.graph.edges():
        net.add_edge(u, v)
    net.show("memory_graph.html")

Conclusion

By enabling our agent to actively link related concepts and reflect during a “consolidation” phase, we address the critical problem of fragmented context in long-running AI interactions. This system paves the way for building more capable autonomous agents.
