Methodology · Analysis Ensemble

Why Personas Matter More Than Models

December 28, 2025 · 8 min read · Updated January 2026

When we started this project, we thought the key insight was "use multiple LLM vendors." We were wrong. The real insight is which roles you assign, not which models you use.

Our analysis ensemble uses 13 specialized personas — analysts, critics, synthesizers, moderators — each with a specific job in the discourse. The underlying model matters less than how you frame each role and when you deploy it.

From Models to Personas

Our original approach: run the same prompt through Claude, ChatGPT, Gemini, and Perplexity. Compare outputs. Look for disagreements. This works, but it misses the deeper insight.

The problem wasn't which vendor we used. It was that we were asking all of them to do the same job. A model playing critic will find different things than a model playing analyst — even if it's the exact same model.

The Key Insight
Personas are prompt framings that give the same LLM different roles in the discourse. Instead of 4 different models with the same job, we now use 1-2 models with 13 different personas injected as needed.
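The idea above can be sketched in a few lines: the same backend model is handed different jobs purely through a role-specific system prompt. This is a minimal illustration, not the project's actual code; `PERSONAS`, `run_persona`, and `call_llm` are hypothetical names.

```python
# One model, many roles: personas are just system-prompt framings.
# All names here are illustrative placeholders.

PERSONAS = {
    "analyst": "You apply one analytical lens at a time. Do not critique.",
    "bullet_hole_producer": "You find flaws, gaps, and missing disclosures.",
    "synthesizer": "You combine prior outputs into a single position.",
}

def run_persona(call_llm, role: str, task: str) -> str:
    """Frame the same underlying model with a role-specific system prompt."""
    return call_llm(system=PERSONAS[role], user=task)

# Usage: the same `call_llm` backend performs three different jobs.
# analysis = run_persona(call_llm, "analyst", dossier)
# critique = run_persona(call_llm, "bullet_hole_producer", analysis)
# position = run_persona(call_llm, "synthesizer", analysis + critique)
```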

The 13 Personas

Our analysis ensemble has specialized roles for each phase:

Data Pipeline

Prep Agent gathers raw documents. Archivist distills them into dense dossiers. No analysis yet — just structured extraction.

Analysis

Analyst executes each lens independently. This is where the actual methodology gets applied — one lens at a time, no cross-contamination.

Discourse

Bullet Hole Producer finds flaws and gaps. Analyst Response defends or revises. Synthesizer combines outputs. This is where debate happens.

Quality Control

Auditor checks completeness — did we address all required signals? PASS or FAIL, no gray areas.

Convergence

Moderator detects if positions are stabilizing or oscillating. Voice of Reason intervenes when needed. We seek convergence, not forced consensus.

Output

Reporter writes the final lens report. Executive produces cross-lens synthesis after all analyses complete.
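The six phases above imply a fixed ordering per lens. A hedged sketch of that ordering, with placeholder persona names standing in for the real calls (the actual dispatch mechanism is not described in this post):

```python
# Phase ordering for a single lens, as described in the sections above.
# Persona names are placeholders; `dispatch` is a hypothetical function
# that runs one persona against one lens.

PIPELINE = [
    ("data",        ["prep_agent", "archivist"]),
    ("analysis",    ["analyst"]),                 # one lens at a time
    ("discourse",   ["bullet_hole_producer", "analyst_response", "synthesizer"]),
    ("quality",     ["auditor"]),                 # PASS or FAIL, no gray areas
    ("convergence", ["moderator"]),               # may inject voice_of_reason
    ("output",      ["reporter"]),                # executive runs after all lenses
]

def run_lens(lens: str, dispatch) -> list:
    """Run every persona in phase order for one lens; return the transcript."""
    transcript = []
    for phase, roles in PIPELINE:
        for role in roles:
            transcript.append(dispatch(role, lens))
    return transcript
```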

See all 13 personas in detail

Cognitive Styles, Not Model Vendors

Different tasks need different thinking styles. We match persona to cognitive style:

| Cognitive Style | When to Use | Example Personas |
| --- | --- | --- |
| Fast / Lightweight | Pattern-matching, checklists, binary decisions | Auditor, Moderator, Archivist |
| Reasoning / Heavy | Nuanced judgment, edge cases, competing interpretations | Analyst, Reporter, Executive |
| Extended Thinking | Step-by-step reasoning, subtle logical issues | Voice of Reason, Steelman |
| Both in Parallel | Diverse perspectives, quick + deep simultaneously | Bullet Hole Producer, Synthesizer |

This is model-agnostic. "Reasoning model" could be Claude Opus, GPT-4, or Gemini Pro. "Fast model" could be Claude Haiku, GPT-4o-mini, or Gemini Flash. The framework works regardless of which vendor you choose.
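One way to make that model-agnosticism concrete is a two-step lookup: persona → cognitive style → configured model. The sketch below assumes this structure; the model identifiers are examples only, and the mapping tables are hypothetical, not the project's configuration.

```python
# Model-agnostic routing sketch: swap vendors by editing MODEL_FOR_STYLE,
# without touching any persona definitions. Model names are examples.

STYLE_FOR_PERSONA = {
    "auditor": "fast", "moderator": "fast", "archivist": "fast",
    "analyst": "reasoning", "reporter": "reasoning", "executive": "reasoning",
    "voice_of_reason": "extended", "steelman": "extended",
    "bullet_hole_producer": "parallel", "synthesizer": "parallel",
}

MODEL_FOR_STYLE = {
    "fast": "claude-haiku",            # or gpt-4o-mini, gemini-flash
    "reasoning": "claude-opus",        # or gpt-4, gemini-pro
    "extended": "claude-opus",         # same tier, extended-thinking mode
    "parallel": ["claude-haiku", "claude-opus"],  # run both, merge outputs
}

def model_for(persona: str):
    """Resolve a persona to whichever model its cognitive style maps to."""
    return MODEL_FOR_STYLE[STYLE_FOR_PERSONA[persona]]
```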

Real Discourse in Action

Here's how personas interact during actual analysis:

Context: Analyzing Criteo's accounting quality — is the revenue recognition aggressive?

Analyst:

Revenue recognition appears appropriate. ASC 606 compliance documented. No restatements.

Bullet Hole Producer:

The gross-to-net transition obscures TAC. Customer concentration not disclosed. Missing segment breakdowns.

Analyst Response:

Valid points on TAC opacity. Revising assessment to CONCERNING. Disclosure gaps warrant skepticism.

Synthesizer:

Consensus: CONCERNING. Core accounting mechanics are compliant, but disclosure quality creates uncertainty about true margin dynamics.

Notice how the Analyst's initial "appears appropriate" evolved through critique. The Bullet Hole Producer didn't change the underlying facts — it surfaced what was missing. The final synthesis acknowledges both the compliance and the gaps.
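The exchange above follows one fixed round shape: critique, then response, then synthesis. A minimal sketch of that round, with the three persona calls passed in as stand-in functions (the real persona prompts are not reproduced here):

```python
# One discourse round, mirroring the Criteo exchange above:
# critique -> response -> synthesis. Persona functions are stand-ins.

def discourse_round(analysis, critique_fn, respond_fn, synthesize_fn):
    critique = critique_fn(analysis)           # Bullet Hole Producer
    revised = respond_fn(analysis, critique)   # Analyst Response
    return synthesize_fn(revised, critique)    # Synthesizer
```

The point of the structure is that the critique never rewrites the analysis directly; the Analyst must respond, and only the Synthesizer combines the two.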

Context: Round 3 of discourse — positions oscillating between CONCERNING and ALARMING

Moderator:

Pendulum detected. ACCOUNTING_INTEGRITY oscillating: ALARMING → CONCERNING → ALARMING. Score: 0.45. Injecting Voice of Reason.

Voice of Reason:

The swing hinges on one metric: TAC as % of revenue. Both interpretations cite the same data. The difference is materiality threshold, not fact disagreement. Recommend: CONCERNING with explicit uncertainty flag.

This is the pendulum problem. When models critique each other, they over-correct. The Moderator detects this pattern and injects Voice of Reason to stabilize the discourse.

Convergence, Not Consensus
We don't need all personas to agree. We need positions stable for 2 rounds and HIGH-materiality debates resolved. Minority positions are preserved in the final report.
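Both rules above (pendulum detection and the two-round stability test) reduce to short checks over a position history. A sketch under stated assumptions: the two-round window comes from the text, while the A → B → A oscillation rule is an illustrative simplification of whatever scoring the Moderator actually uses.

```python
# Pendulum and convergence checks over a list of per-round positions,
# oldest first. Simplified illustrations of the rules described above.

def is_pendulum(history):
    """True if the last three positions swing back and forth (A, B, A)."""
    if len(history) < 3:
        return False
    a, b, c = history[-3:]
    return a == c and a != b

def has_converged(history, window=2):
    """True if the position has been stable for `window` rounds."""
    return len(history) >= window and len(set(history[-window:])) == 1

# The oscillation from the transcript above triggers the pendulum check:
# is_pendulum(["ALARMING", "CONCERNING", "ALARMING"]) -> True
```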

Why This Matters

The old model-centric approach had problems:

  • Vendor lock-in — Analysis quality depended on which companies happened to have the best models this quarter
  • Role confusion — Every model tried to do everything, leading to generic outputs
  • No structure — Disagreements were random, not systematic

The persona-based approach fixes these:

  • Model-agnostic — Swap vendors without changing the methodology
  • Specialized roles — Each persona does one job well
  • Structured discourse — Critique → Response → Synthesis follows a predictable pattern
  • Convergence detection — We know when to stop, not just when to start

The Practical Takeaway

If you're building LLM-based analysis systems:

  1. Design roles, not prompts. What job does each agent have? Who critiques whom?
  2. Match cognitive style to task. Use fast models for checklists, reasoning models for judgment calls.
  3. Build in convergence detection. Don't just run N rounds — detect when positions stabilize.
  4. Preserve dissent. Disagreement is information. Surface it, don't suppress it.

The underlying model is a commodity. The ensemble design is the intellectual property.

This report was generated by the Runchey Research AI Ensemble using primary SEC data and reviewed by Matthew Runchey for accuracy.

This analysis is for educational purposes only and does not constitute investment advice. See our Editorial Integrity & Disclosure Policy and Terms of Service.