
What This Is (And Isn't)

Rigorous stock analysis powered by multiple AI models, designed to help you think better about investing — not to tell you what to buy.

Lead Analyst

Matt Runchey

Founder & Lead Analyst

Software engineer with a background in building data-intensive systems. Built Runchey Research to demonstrate that structured multi-model AI analysis can surface insights that single-prompt approaches miss — and to document the methodology transparently. Available for select custom analysis commissions and editorial consulting.

Every analysis on this site goes through human editorial review. The models are tools in the process — they do not have final editorial authority.

Editorial Integrity & Disclosure Policy →

The Problem We're Solving

Half-Baked LLM Responses

Ask ChatGPT "should I buy this stock?" and you'll get a confident-sounding answer that might be completely hallucinated. LLMs are powerful but need guardrails.

Meme-able WSB Posts

"YOLO" culture is entertaining but dangerous. When your investment thesis fits in a tweet, you're probably missing something important.

Subscription Mill Newsletters

Too many "experts" are primarily interested in selling you a $49/month subscription, not in whether their picks actually perform.

Our Approach

13-Persona Ensemble

Specialized personas — analysts, critics, synthesizers, moderators — each with a distinct role. Where they converge, we have higher confidence. Where they disagree, we preserve the nuance.

Hallucination Risk Reduction

Hundreds of hours of prompt iteration enforce structured outputs, mandatory calculations, and explicit statements of uncertainty whenever data isn't verified.
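One way to make "structured outputs with explicit uncertainty" concrete is to reject any model response that omits the required structure. This is an illustrative sketch, not the site's actual pipeline; the field names (`thesis`, `calculations`, `uncertainty_notes`) are hypothetical.

```python
import json

# Hypothetical required schema: field names are illustrative, not the
# production schema used by Runchey Research.
REQUIRED_FIELDS = {"thesis", "calculations", "uncertainty_notes"}

def validate_output(raw: str) -> dict:
    """Parse a model response and fail loudly if structure is missing."""
    data = json.loads(raw)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"Output missing required fields: {sorted(missing)}")
    # Uncertainty must always be stated, even when there is none to report.
    if not data["uncertainty_notes"]:
        raise ValueError("Output must state what is unverified.")
    return data

sample = (
    '{"thesis": "Undervalued on FCF", '
    '"calculations": ["FCF yield = 8.2%"], '
    '"uncertainty_notes": ["Debt figures not verified"]}'
)
parsed = validate_output(sample)
```

The point is that "encouraging" structure in the prompt is paired with hard validation after the fact, so an unstructured answer never reaches editorial review silently.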

Transparent Verification

Analyses include a "What We Checked" section showing what was manually verified vs. LLM-only — so you know where to dig deeper.

Honest Classifications

We don't issue "buy" or "sell" calls. We classify stocks based on how the evidence stacks up — including "Value Trap" when low valuations are justified, not mispriced.

LLMs as Tools to Hone

Public discourse about AI tends to squish its dynamic range. The conversation often focuses on costs — environmental impact, concentration risk, expense — while benefits are either straw-manned ("yes but it hallucinates") or misunderstood (many people only see ChatGPT as a toy that makes funny mistakes).

This project aims to demonstrate something different. With careful prompt engineering, structured validation, and multi-model cross-checking, LLMs become force multipliers for research — not magic oracles, not threats, but tools to hone.

We're learning as we build. Our understanding evolves with direct experience, not static narratives about what AI can or can't do. Hands-on work is the antidote to having your mind made up based on the words of others.

The Analysis Ensemble

We've evolved from "use multiple LLM vendors" to designing specialized personas that each play a distinct role in the analysis process. The underlying model matters less than the role it's assigned. Our persona ensemble includes analysts, critics, synthesizers, and moderators — each with specific jobs and cognitive styles.

Key Phases

Data Pipeline

Prep Agent gathers raw documents. Archivist distills them into structured dossiers. No analysis yet — just extraction.

Analysis

Analyst executes each lens independently using reasoning-heavy models. One lens at a time, no cross-contamination.

Discourse

Bullet Hole Producer finds flaws. Analyst Response defends or revises. Synthesizer combines perspectives. This is where debate happens.

Convergence

Moderator detects whether positions are stabilizing or oscillating. Voice of Reason intervenes when needed. We seek convergence, not forced consensus.
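The four phases above can be sketched as a linear pipeline of persona calls. This is a simplified illustration: `run_persona` is a hypothetical wrapper around whatever model backs each role, and the real process involves iteration rather than a single pass.

```python
def run_persona(name: str, task: str, payload: str) -> str:
    # Placeholder: in practice this would call an LLM with the persona's
    # system prompt and return its response.
    return f"[{name}] {task}: {payload[:40]}"

def analyze(ticker: str, lenses: list[str]) -> str:
    # Phase 1: Data pipeline -- extraction only, no analysis yet.
    raw = run_persona("Prep Agent", "gather documents", ticker)
    dossier = run_persona("Archivist", "distill dossier", raw)

    # Phase 2: Analysis -- one lens at a time, no cross-contamination.
    views = [run_persona("Analyst", f"lens={lens}", dossier) for lens in lenses]

    # Phase 3: Discourse -- critique, response, synthesis.
    critiques = [run_persona("Bullet Hole Producer", "find flaws", v) for v in views]
    responses = [run_persona("Analyst Response", "defend or revise", c) for c in critiques]
    draft = run_persona("Synthesizer", "combine perspectives", " | ".join(responses))

    # Phase 4: Convergence -- moderator checks stability before finalizing.
    return run_persona("Moderator", "check convergence", draft)
```

The structural point is the ordering: extraction is sealed off from analysis, each lens runs independently, and critique happens only after independent views exist.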

Model-Agnostic Design

Our framework works across vendors. "Reasoning model" could be Claude, GPT-4, or Gemini Pro. "Fast model" could be Haiku, GPT-4o-mini, or Gemini Flash. We match cognitive style to task, not brand to role.

Learn more about the ensemble →
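Matching cognitive style to task rather than brand to role amounts to one level of indirection: roles name a style, and a per-deployment table resolves the style to a concrete model. A minimal sketch, with role assignments and model identifiers chosen for illustration only:

```python
# Each persona role names a cognitive style, not a vendor.
# These assignments are illustrative, not the site's actual configuration.
ROLE_STYLE = {
    "Analyst": "reasoning",
    "Synthesizer": "reasoning",
    "Archivist": "fast",
    "Moderator": "fast",
}

# One deployment's choices; swapping vendors means editing only this table.
DEPLOYMENT = {
    "reasoning": "claude-sonnet",   # could equally be GPT-4 or Gemini Pro
    "fast": "gemini-flash",         # could equally be Haiku or GPT-4o-mini
}

def model_for(role: str) -> str:
    """Resolve a persona role to a concrete model via its cognitive style."""
    return DEPLOYMENT[ROLE_STYLE[role]]
```

Because personas depend only on the style key, replacing one vendor's model with another's never touches the persona definitions themselves.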

What You Get vs. What You Don't

What You Get

  • Deep-dive analyses with clear reasoning you can learn from
  • Multi-model debates that expose blind spots
  • Structured metrics (Treadmill Test, FCF analysis, etc.)
  • Transparent methodology — see how conclusions are reached
  • Educational content on value investing concepts

What You Don't Get

  • "Buy" or "sell" recommendations
  • Entry/exit price signals
  • Portfolio construction advice
  • Personalized financial guidance
  • Trading signals or alerts
  • Guarantees of any kind


Ready to Dig Deeper?

Explore our analyses to see how multiple AI models debate whether beaten-down stocks are hidden gems or value traps.

View Analyses