Contradiction Detection: How AI Finds Belief Changes You Never Noticed
Tushar Laad · 10 min read · technical · AI

Contradiction Detection: How AI Finds Belief Changes

The Problem: O(n^2) Is Too Expensive

If you have 100 extracted beliefs, checking every pair for contradictions requires 4,950 LLM calls. At 500 beliefs, that's 124,750 calls. Quadratic growth makes exhaustive pairwise checking prohibitive.
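
The numbers come straight from the handshake formula, n choose 2:

```python
def pair_count(n: int) -> int:
    """Number of unordered pairs among n beliefs: n * (n - 1) / 2."""
    return n * (n - 1) // 2

print(pair_count(100))  # 4950
print(pair_count(500))  # 124750
```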

The Solution: Embedding Pre-Filter + LLM Verification

MemryLab's contradiction detection uses a two-phase approach:

Phase 1: Embedding Pre-Filter (O(n) calls)

  1. Embed all active belief and preference facts using the embedding model
  2. Compute pairwise cosine similarity
  3. Retain only pairs with similarity > 0.7 (topically related)
  4. Sort by similarity score, cap at k=20 pairs
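
The steps above can be sketched as follows. This is a minimal illustration, not MemryLab's actual code; it assumes the beliefs have already been embedded into an `(n, d)` matrix, and the function name `prefilter_pairs` is made up for the example:

```python
import numpy as np

def prefilter_pairs(embeddings: np.ndarray, threshold: float = 0.7, k: int = 20):
    """Return up to k index pairs (i, j) whose cosine similarity exceeds
    `threshold`, sorted by similarity, highest first.

    embeddings: (n, d) array, one row per active belief/preference fact.
    """
    # Normalize rows so a plain dot product equals cosine similarity.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T

    n = len(embeddings)
    # Keep only topically related pairs (upper triangle, above threshold).
    candidates = [
        (sim[i, j], i, j)
        for i in range(n) for j in range(i + 1, n)
        if sim[i, j] > threshold
    ]
    candidates.sort(reverse=True)           # most similar pairs first
    return [(i, j) for _, i, j in candidates[:k]]
```

Only the surviving pairs (at most k = 20) go on to Phase 2, so the expensive LLM step is decoupled from n.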

Key insight: Two beliefs must be about the same topic before they can contradict each other. "I love hiking" and "I prefer tea over coffee" have low embedding similarity — they can't contradict regardless of content.

Phase 2: LLM Verification (O(k) calls, k <= 20)

For each candidate pair, ask the LLM:

> "Are these two beliefs from the same person contradictory?"
> Belief A: "I value stability in my career"
> Belief B: "Playing it safe is the real risk"

The LLM returns:

- is_contradiction: true/false
- explanation: brief reasoning
- severity: minor/moderate/major
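
A sketch of the verification call, assuming a generic text-in/text-out completion function (`call_llm` here is a hypothetical stand-in for whatever client you use; the prompt wording beyond the quoted question is illustrative):

```python
import json

VERIFY_PROMPT = """Are these two beliefs from the same person contradictory?
Belief A: "{a}"
Belief B: "{b}"
Reply with JSON: {{"is_contradiction": true or false, "explanation": "...", "severity": "minor" | "moderate" | "major"}}"""

def verify_pair(belief_a: str, belief_b: str, call_llm) -> dict:
    """Ask the LLM to judge one candidate pair and validate its answer."""
    raw = call_llm(VERIFY_PROMPT.format(a=belief_a, b=belief_b))
    result = json.loads(raw)
    # Minimal schema check before trusting the verdict.
    if not isinstance(result["is_contradiction"], bool):
        raise ValueError("is_contradiction must be a boolean")
    if result["severity"] not in {"minor", "moderate", "major"}:
        raise ValueError("unexpected severity value")
    return result
```

Because k is capped at 20, this phase costs a fixed handful of calls no matter how many beliefs were extracted.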

Handling False Positives

A common false-positive boundary: "I enjoy working alone" and "I collaborate effectively in teams" are semantically related (high embedding similarity) but not contradictory. The LLM correctly classifies such pairs as compatible in 88% of cases.

Results

On a benchmark of 50 synthetic belief pairs (25 true contradictions, 25 compatible):

| Method                   | LLM Calls | Precision | Recall | F1   |
|--------------------------|-----------|-----------|--------|------|
| Random pairing           | 20        | 0.60      | 0.56   | 0.58 |
| All-pairs LLM            | 1,275     | 0.80      | 0.72   | 0.76 |
| Our method (Llama 8B)    | 20        | 0.80      | 0.72   | 0.76 |
| Our method (GPT-4o-mini) | 20        | 0.92      | 0.88   | 0.90 |

Our method matches all-pairs quality with 20 calls instead of 1,275, a 98.4% reduction in LLM calls.
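
As a sanity check, the F1 column follows directly from the precision and recall columns (F1 is their harmonic mean):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(f"{f1(0.80, 0.72):.2f}")  # 0.76  (Llama 8B row)
print(f"{f1(0.92, 0.88):.2f}")  # 0.90  (GPT-4o-mini row)
```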

What It Finds

In a real case study across 14,653 documents:

- 127 belief-level facts extracted
- 4 genuine contradictions detected
- Most striking: a career-philosophy reversal spanning 20 months

The system also handles gradual shifts — beliefs that weren't contradictory at the time of writing but became contradictory given later statements.

Try It Yourself

MemryLab's contradiction detector runs automatically during the analysis pipeline. Import your data, run analysis, and check the Insights view for detected contradictions.

Download MemryLab | Read the Paper

Ready to explore your data?

Download MemryLab — free, open source, privacy-first.
