What This Is (And Isn't)

Rigorous stock analysis powered by multiple AI models, designed to help you think better about investing — not to tell you what to buy.

The Problem We're Solving

Half-Baked LLM Responses

Ask ChatGPT "should I buy this stock?" and you'll get a confident-sounding answer that might be completely hallucinated. LLMs are powerful but need guardrails.

Meme-able WSB Posts

"YOLO" culture is entertaining but dangerous. When your investment thesis fits in a tweet, you're probably missing something important.

Subscription Mill Newsletters

Too many "experts" are primarily interested in selling you a $49/month subscription, not in whether their picks actually perform.

Our Approach

Multi-Model Consensus

Every analysis runs through at least 4 different AI models. Where they agree, we have higher confidence. Where they disagree, we show you the debate.
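To make the mechanics concrete, here is a minimal sketch of how a consensus pass might work. The model names and the query_model helper are illustrative placeholders, not our production pipeline:

```python
from collections import Counter

# Illustrative sketch only: model names and query_model are placeholders.
MODELS = ["claude", "gpt", "gemini", "perplexity"]

def query_model(model: str, ticker: str) -> str:
    """Hypothetical helper: ask one model to classify a ticker."""
    raise NotImplementedError("each model has its own API client in practice")

def run_consensus(ticker: str) -> dict:
    # One classification per model, e.g. "hidden gem" or "value trap".
    votes = {model: query_model(model, ticker) for model in MODELS}
    tally = Counter(votes.values())
    label, count = tally.most_common(1)[0]
    return {
        "ticker": ticker,
        "votes": votes,                              # the full per-model debate
        "consensus": label if count >= 3 else None,  # agreement of 3+ of 4 models
        "agreement": count / len(MODELS),            # more agreement, higher confidence
    }
```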

Hallucination Risk Reduction

We've put hundreds of hours of prompt iteration into encouraging structured outputs, mandatory calculations, and explicit statements of uncertainty when data isn't verified.
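As a rough illustration, the shape we push model outputs toward looks something like the structure below. The field names are hypothetical, chosen for clarity rather than taken from our actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MetricClaim:
    """One claim a model makes about a stock, with its working shown."""
    name: str               # e.g. "free cash flow yield"
    value: Optional[float]  # None when the underlying data could not be verified
    calculation: str        # the arithmetic the model was required to show
    source: str             # SEC filing, press release, or "unverified / from memory"
    confidence: str         # "high" | "medium" | "low", stated explicitly
```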

Human Verification Layer

Analyses include a "Human Verification Status" showing what's been manually confirmed (SEC filings, physics, etc.) vs. LLM-only.
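A minimal sketch of how that status might be attached to each claim, with illustrative labels rather than our exact taxonomy:

```python
from enum import Enum

class VerificationStatus(Enum):
    """Illustrative labels for the human-verification layer."""
    HUMAN_VERIFIED = "human-verified"  # manually checked against filings, physics, etc.
    LLM_ONLY = "llm-only"              # asserted by the models, not independently confirmed

def annotate(claim: dict, status: VerificationStatus) -> dict:
    """Attach a verification status to a claim before it is published."""
    return {**claim, "verification": status.value}
```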

Honest Classifications

We don't issue "buy" or "sell" calls. We classify stocks based on how the evidence stacks up — including "Value Trap" when low valuations are justified, not mispriced.

LLMs as Tools to Hone

Public discourse about AI tends to squish its dynamic range. The conversation often focuses on costs — environmental impact, concentration risk, expense — while benefits are either straw-manned ("yes but it hallucinates") or misunderstood (many people only see ChatGPT as a toy that makes funny mistakes).

This project aims to demonstrate something different. With careful prompt engineering, structured validation, and multi-model cross-checking, LLMs become force multipliers for research — not magic oracles, not threats, but tools to hone.

We're learning as we build. Our understanding evolves with direct experience, not static narratives about what AI can or can't do. Hands-on work is the antidote to having your mind made up based on the words of others.

How We Pick Our Models

Our methodology isn't locked to any specific set of models. We currently use at least 4 models per analysis, selected for diversity of perspective and accessibility. As the AI landscape evolves — with models like Llama, Grok, DeepSeek, and others maturing — we may add more viewpoints. The core principle: multiple independent perspectives reduce blind spots.

Current Model Lineup

Claude (Anthropic)

Frontier reasoning model

Known for careful, nuanced analysis and willingness to express uncertainty. Often catches edge cases others miss. Strong at structured thinking and following complex multi-stage prompts.

GPT (OpenAI)

Frontier reasoning model

Broad knowledge base and strong general reasoning. Good at synthesizing diverse information sources. Often provides the most "conventional" Wall Street-style analysis — useful as a baseline.

Gemini (Google)

Frontier reasoning model

Often takes contrarian positions and challenges assumptions. Strong at identifying structural issues and second-order effects. Tends to be more aggressive in its conclusions — a useful stress test.

Perplexity

Search-augmented model

Not a "frontier" reasoning model in the traditional sense — Perplexity excels at real-time web research and source citation. We use it to surface recent news, filings, and data that other models might miss due to training cutoffs.

Future Additions

We're watching developments in open-source models (Llama, Mixtral), Chinese models (DeepSeek, Qwen), and specialized financial models. As these mature and become accessible, we may add them to our lineup — or use "fast" models for more automated screening. The goal is diversity of perspective, not brand loyalty.

What You Get vs. What You Don't

What You Get

  • Deep-dive analysis frameworks you can learn from
  • Multi-model debates that expose blind spots
  • Structured metrics (Treadmill Test, FCF analysis, etc.)
  • Transparent methodology — see how conclusions are reached
  • Educational content on value investing concepts

What You Don't Get

  • • "Buy" or "sell" recommendations
  • • Price targets to trade on
  • • Portfolio construction advice
  • • Personalized financial guidance
  • • Trading signals or alerts
  • • Guarantees of any kind

Ready to Hunt Roadkill?

Explore our analyses to see how multiple AI models debate whether beaten-down stocks are hidden gems or value traps.

View Analyses