# Contradiction Detection: How AI Finds Belief Changes
## The Problem: O(n^2) Is Too Expensive
If you have 100 extracted beliefs, checking every pair for contradictions requires C(100, 2) = 4,950 LLM calls. At 500 beliefs, that grows to 124,750 calls. Running an LLM call per pair is prohibitive in both cost and latency.
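The quadratic blow-up is easy to verify with the standard library:

```python
from math import comb

# Unordered pairs among n beliefs: C(n, 2) = n * (n - 1) / 2
for n in (100, 500):
    print(f"{n} beliefs -> {comb(n, 2):,} pairwise LLM calls")
```

This prints 4,950 for 100 beliefs and 124,750 for 500, matching the counts above.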
## The Solution: Embedding Pre-Filter + LLM Verification
MemryLab's contradiction detection uses a two-phase approach:
### Phase 1: Embedding Pre-Filter (O(n) calls)
- Embed all active belief and preference facts using the embedding model
- Compute pairwise cosine similarity
- Retain only pairs with similarity > 0.7 (topically related)
- Sort by similarity score, cap at k=20 pairs
Key insight: Two beliefs must be about the same topic before they can contradict each other. "I love hiking" and "I prefer tea over coffee" have low embedding similarity — they can't contradict regardless of content.
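A minimal sketch of the pre-filter, assuming belief embeddings have already been computed as rows of a NumPy array. The threshold (0.7) and cap (20) come from the steps above; the function and variable names are illustrative, not MemryLab's actual API:

```python
import numpy as np

SIM_THRESHOLD = 0.7   # pairs below this are topically unrelated
MAX_PAIRS = 20        # cap on candidate pairs sent to the LLM (k)

def candidate_pairs(embeddings: np.ndarray) -> list[tuple[int, int, float]]:
    """Return up to MAX_PAIRS (i, j, similarity) tuples, most similar first.

    `embeddings` is an (n, d) array, one row per active belief/preference fact.
    """
    # Normalize rows so a dot product equals cosine similarity
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = unit @ unit.T

    n = len(embeddings)
    pairs = [
        (i, j, float(sims[i, j]))
        for i in range(n)
        for j in range(i + 1, n)   # upper triangle only: each pair once
        if sims[i, j] > SIM_THRESHOLD
    ]
    pairs.sort(key=lambda p: p[2], reverse=True)
    return pairs[:MAX_PAIRS]
```

Only the surviving pairs proceed to Phase 2, so the per-pair LLM cost is bounded by the cap rather than by n.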
### Phase 2: LLM Verification (O(k) calls, k <= 20)
For each candidate pair, ask the LLM:
> "Are these two beliefs from the same person contradictory?" > Belief A: "I value stability in my career" > Belief B: "Playing it safe is the real risk"
The LLM returns:
- is_contradiction: true/false
- explanation: brief reasoning
- severity: minor/moderate/major
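The verification call can be sketched as follows. `call_llm` is a placeholder for whatever model client is in use, and the JSON response shape mirrors the three fields above; all names here are illustrative assumptions, not MemryLab's actual interface:

```python
import json

VERIFY_PROMPT = """Are these two beliefs from the same person contradictory?
Belief A: "{a}"
Belief B: "{b}"
Respond with JSON: {{"is_contradiction": true or false, \
"explanation": "...", "severity": "minor" or "moderate" or "major"}}"""

def verify_pair(belief_a: str, belief_b: str, call_llm) -> dict:
    """Ask the LLM whether a candidate pair is contradictory.

    `call_llm` is any function that takes a prompt string and returns the
    model's raw text response (a stand-in for the real model client).
    """
    raw = call_llm(VERIFY_PROMPT.format(a=belief_a, b=belief_b))
    result = json.loads(raw)
    # Coerce to the three expected fields; default to "no contradiction"
    return {
        "is_contradiction": bool(result.get("is_contradiction", False)),
        "explanation": result.get("explanation", ""),
        "severity": result.get("severity", "minor"),
    }
```

Injecting the client as a function makes the step easy to unit-test with a stubbed response.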
## Handling False Positives
A common false-positive pattern: "I enjoy working alone" and "I collaborate effectively in teams" are semantically related (high embedding similarity) but not contradictory. The LLM correctly classifies such pairs as compatible in 88% of cases.
## Results
On a benchmark of 50 synthetic belief pairs (25 true contradictions, 25 compatible):
| Method | LLM Calls | Precision | Recall | F1 |
|---|---|---|---|---|
| Random pairing | 20 | 0.60 | 0.56 | 0.58 |
| All-pairs LLM | 1,275 | 0.80 | 0.72 | 0.76 |
| Our method (Llama 8B) | 20 | 0.80 | 0.72 | 0.76 |
| Our method (GPT-4o-mini) | 20 | 0.92 | 0.88 | 0.90 |
Our method (with Llama 8B) matches all-pairs quality with 20 calls instead of 1,275: roughly 1.6% of the call count, a ~98% reduction.
## What It Finds
In a real case study across 14,653 documents:
- 127 belief-level facts extracted
- 4 genuine contradictions detected
- Most striking: a career-philosophy reversal spanning 20 months
The system also handles gradual shifts — beliefs that weren't contradictory at the time of writing but became contradictory given later statements.
## Try It Yourself
MemryLab's contradiction detector runs automatically during the analysis pipeline. Import your data, run analysis, and check the Insights view for detected contradictions.