Architecture
MemryLab follows a strict hexagonal (ports & adapters) architecture. The core domain has zero knowledge of specific databases, LLM providers, or UI frameworks.
System Overview
React 19 Frontend
Timeline, Activity, Search, Ask/Chat, Insights, Evolution, Import, Memory, Entities, Graph, Logs, Settings
Tauri IPC — 42 Commands + Events
Domain (Pure Logic)
Models
Document, Chunk, Entity, Relationship, Theme, Memory, Insight, Narrative, Contradiction, Sentiment
Ports (Interfaces)
IDocumentStore, IVectorStore, IGraphStore, ILlmProvider, IEmbeddingProvider, IMemoryStore, IPageIndex, ITimelineStore
SQLite Adapters: 7 stores + FTS5
Ollama: Local LLM + Embeddings
OpenAI-Compat: 8 cloud providers
Claude API: Anthropic adapter
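The ports-and-adapters split above can be sketched in plain Rust. The names below (`DocumentStore`, `Document`, `InMemoryStore`) are illustrative, not MemryLab's actual API, and the real ports use `async-trait`; this sketch is synchronous and std-only to stay self-contained.

```rust
// A domain model: plain data, no infrastructure dependencies.
struct Document {
    id: u64,
    text: String,
}

// Port: an interface owned by the domain. The domain compiles with no
// knowledge of SQLite, Ollama, or any other adapter behind this trait.
trait DocumentStore {
    fn insert(&mut self, doc: Document);
    fn get(&self, id: u64) -> Option<&Document>;
}

// Adapter: an infrastructure-side implementation of the port. A SQLite
// adapter would live at this layer; a Vec stands in for the database here.
struct InMemoryStore {
    docs: Vec<Document>,
}

impl DocumentStore for InMemoryStore {
    fn insert(&mut self, doc: Document) {
        self.docs.push(doc);
    }
    fn get(&self, id: u64) -> Option<&Document> {
        self.docs.iter().find(|d| d.id == id)
    }
}

fn main() {
    let mut store = InMemoryStore { docs: Vec::new() };
    store.insert(Document { id: 1, text: "hello".into() });
    println!("stored: {}", store.get(1).unwrap().text);
}
```

Because callers depend only on the trait, swapping the in-memory adapter for a SQLite-backed one changes no domain code.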
Pipelines
Ingestion Pipeline
Detect → Parse → Dedup → Normalize → Chunk → Embed → Store → Index
Analysis Pipeline (8 stages)
Themes → Sentiment → Beliefs → Entities → Insights → Contradictions → Evolution → Narratives
RAG Query Pipeline
Classify → Retrieve → RRF Fuse → Memory Augment → LLM Generate → Citations
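The RRF Fuse step of the query pipeline can be sketched as follows. Reciprocal Rank Fusion scores each document as the sum of `1 / (k + rank)` across the ranked lists it appears in; `k = 60` is the constant from the original RRF paper, and whether MemryLab uses the same value is an assumption.

```rust
use std::collections::HashMap;

// Fuse two ranked result lists (e.g. FTS5 keyword hits and vector-similarity
// hits) into one list ordered by combined RRF score.
fn rrf_fuse(keyword: &[&str], vector: &[&str], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for list in [keyword, vector] {
        for (rank, doc) in list.iter().enumerate() {
            // Each list contributes 1 / (k + rank) for every document it ranks;
            // ranks are 1-based, hence the `+ 1.0`.
            *scores.entry((*doc).to_string()).or_insert(0.0) +=
                1.0 / (k + rank as f64 + 1.0);
        }
    }
    let mut fused: Vec<(String, f64)> = scores.into_iter().collect();
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}

fn main() {
    let keyword = ["doc_a", "doc_b", "doc_c"];
    let vector = ["doc_b", "doc_d", "doc_a"];
    // Documents appearing in both lists outrank single-list hits.
    println!("{:?}", rrf_fuse(&keyword, &vector, 60.0));
}
```

RRF needs only ranks, not comparable scores, which is why it suits fusing keyword and vector results whose raw scores live on different scales.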
Tech Stack
App Shell: Tauri 2.0 (Rust + WebView2)
Backend: Rust with async-trait, tokio
Frontend: React 19, TypeScript, Vite 8
Styling: Tailwind CSS 4, Lucide Icons
Visualization: D3.js v7 (timeline, graph)
Database: SQLite (WAL mode) + FTS5
Vector Store: SQLite (cosine similarity)
Logging: tracing + tracing-appender (daily rotation)
Security: OS Keychain (keyring crate)
Build: Tauri CLI, NSIS/MSI/DMG/AppImage
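The vector store ranks chunks by cosine similarity; the scorer itself is a few lines. This is a generic sketch, not MemryLab's implementation, with made-up 3-dimensional vectors standing in for real embeddings.

```rust
// Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1] for real embeddings.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        return 0.0; // guard against zero vectors
    }
    dot / (norm_a * norm_b)
}

fn main() {
    let query = [1.0, 0.0, 1.0];
    let same = [1.0, 0.0, 1.0];
    let orthogonal = [0.0, 1.0, 0.0];
    println!("same: {}", cosine_similarity(&query, &same));
    println!("orthogonal: {}", cosine_similarity(&query, &orthogonal));
}
```

Keeping vectors in SQLite and scoring them in Rust avoids a separate vector database, at the cost of a linear scan per query.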
Data Flow
- Import: User drops a file/folder/ZIP. The system auto-detects the source using confidence scoring across 30+ adapters. A generic sweep catches remaining files.
- Parse: Platform-specific adapter extracts text, timestamps, participants, and metadata. ZIP files are extracted transparently.
- Process: Documents are deduplicated (SHA-256), normalized (Unicode NFC), chunked (512 tokens, paragraph-aware), and embedded via the configured AI provider.
- Analyze: The 8-stage analysis pipeline extracts themes, sentiment, beliefs, entities, insights, contradictions, and belief evolution, then generates narratives, all via LLM.
- Query: Hybrid search combines FTS5 keyword matching and vector similarity via Reciprocal Rank Fusion. RAG augments the prompt with memory facts before LLM generation.
- Log: Every action (import, analysis, search, config change) is logged to the activity history with full results and timing.
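The paragraph-aware chunking in the Process step can be sketched as a greedy packer that never splits a paragraph. This is illustrative only: whitespace-separated words stand in for tokens, and the real pipeline's tokenizer and packing rules are not shown here.

```rust
// Split text on blank lines, then pack whole paragraphs into chunks of at
// most `max_words` words, starting a new chunk rather than splitting a
// paragraph mid-way (the real budget would be ~512 tokens, not words).
fn chunk_paragraph_aware(text: &str, max_words: usize) -> Vec<String> {
    let mut chunks = Vec::new();
    let mut current = String::new();
    let mut current_words = 0;
    for para in text.split("\n\n").filter(|p| !p.trim().is_empty()) {
        let words = para.split_whitespace().count();
        // Close the current chunk if this paragraph would overflow it.
        if current_words + words > max_words && !current.is_empty() {
            chunks.push(current.trim().to_string());
            current.clear();
            current_words = 0;
        }
        current.push_str(para);
        current.push_str("\n\n");
        current_words += words;
    }
    if !current.trim().is_empty() {
        chunks.push(current.trim().to_string());
    }
    chunks
}

fn main() {
    let text = "one two three\n\nfour five\n\nsix seven eight nine";
    for (i, c) in chunk_paragraph_aware(text, 5).iter().enumerate() {
        println!("chunk {}: {:?}", i, c);
    }
}
```

Respecting paragraph boundaries keeps each chunk semantically coherent, which improves both embedding quality and retrieved-context readability.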
Directory Structure
MemryLab/
├── src/ # React frontend
│ ├── components/
│ │ ├── timeline/ # D3 zoomable timeline
│ │ ├── graph/ # D3 force-directed graph
│ │ ├── ask/ # RAG chat with history
│ │ ├── import/ # 30+ source import wizard
│ │ ├── activity/ # Activity history feed
│ │ ├── logs/ # Application log viewer
│ │ ├── onboarding/ # First-run wizard
│ │ └── settings/ # Provider config + about
│ ├── stores/ # Zustand state management
│ └── lib/tauri.ts # Type-safe IPC bindings (42 commands)
├── src-tauri/src/
│ ├── domain/
│ │ ├── models/ # Document, Entity, Theme, Memory, etc.
│ │ └── ports/ # 9 trait interfaces
│ ├── adapters/
│ │ ├── sqlite/ # 7 SQLite stores + migrations
│ │ ├── llm/ # Ollama, Claude, OpenAI-compat, usage logger
│ │ └── keychain.rs # OS credential store
│ ├── pipeline/
│ │ ├── ingestion/ # 30+ source adapters + orchestrator
│ │ ├── analysis/ # 8 analysis stages
│ │ └── pii_detector.rs # PII regex scanner
│ ├── query/ # RAG pipeline with RRF fusion
│ ├── prompts/ # Versioned prompt templates
│ └── commands/ # 42 Tauri command handlers
├── website/ # Next.js marketing site
└── docs/               # Design documents
Design Principles
- Privacy by default: All processing is local. Network calls go only to the user's chosen LLM provider, and only with the minimum context needed.
- Hexagonal architecture: Domain models have zero dependencies on infrastructure. Ports define interfaces; adapters implement them.
- Two-pass exploratory import: Platform adapter runs first, then a generic sweep catches all remaining text files. No file left behind.
- Provider-agnostic LLM: One trait interface, 9 providers. Switching from Gemini to Groq is a single click.
- Tiny binary: 4.7MB installer. Tauri 2.0 uses the system WebView — no bundled Chromium.
- Graceful degradation: If embeddings fail, search still works via FTS5. If LLM is offline, import still succeeds.
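The graceful-degradation rule for search can be sketched as a fallback: attempt vector search, and on any embedding failure serve keyword results instead. Both search functions below are hypothetical stand-ins, not MemryLab's actual API.

```rust
// Stand-in for vector search; fails when the embedding provider is down.
fn vector_search(query: &str, embeddings_ok: bool) -> Result<Vec<String>, String> {
    if embeddings_ok {
        Ok(vec![format!("semantic hit for '{}'", query)])
    } else {
        Err("embedding provider unavailable".to_string())
    }
}

// Stand-in for FTS5-style keyword search, which needs no AI provider.
fn keyword_search(query: &str) -> Vec<String> {
    vec![format!("keyword hit for '{}'", query)]
}

// If embeddings fail, search still works via full-text keywords.
fn search_with_fallback(query: &str, embeddings_ok: bool) -> Vec<String> {
    match vector_search(query, embeddings_ok) {
        Ok(hits) => hits,
        Err(_) => keyword_search(query),
    }
}

fn main() {
    println!("{:?}", search_with_fallback("old journals", true));
    println!("{:?}", search_with_fallback("old journals", false));
}
```

The same shape applies to import: embedding and analysis stages can fail independently while the documents themselves are still stored and indexed.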