We didn't plan to analyze 32 companies in under two months. But each analysis surfaced questions that pointed to the next one — and after 2,234 lens reports, patterns started emerging that no single analysis could reveal.
This post is different from our usual equity deep dives. It steps back from individual companies to survey what we've built, what surprised us, and what the multi-LLM process taught us about itself.
The raw numbers are interesting, but they're not the story. The story is what happened between analyses — the cross-company patterns and the process lessons that only became visible after enough data accumulated.
Five Patterns That Emerged Across 32 Companies
None of these were hypotheses we started with. They emerged from reading synthesis after synthesis and noticing the same tensions appearing in unrelated companies, across different industries, flagged by different lenses.
The Insider-Narrative Divergence
When management says one thing and insiders do another. We found this pattern in roughly a third of companies analyzed — not an isolated anomaly, but a structural phenomenon.
The Insider Investigator lens was designed to catch individual cases. What we didn't expect was how often the pattern would repeat — or how consistently the insiders' actions would contradict their own public statements.
The Inverted Market Narrative
Sometimes the market isn't just wrong in degree — it's wrong in direction. The Myth Meter lens found INVERTED or DIVERGING narrative-reality gaps in over half of companies analyzed.
This is the most actionable pattern we found. When the market narrative runs opposite to operational reality, something has to give. The question is always timing.
The Governance-Operations Disconnect
Companies can simultaneously build defensible businesses AND extract value through governance. This duality was the hardest finding to hold in tension — because both sides are genuinely true.
The natural instinct is to resolve the tension — either the operations prove governance is fine, or governance concerns invalidate the operations. Our analyses consistently found that both can be true at once. That discomfort is the signal.
The Unobservable Fulcrum
The most important variable in an analysis is often the least observable — and models converge most strongly around it, which is itself a warning sign.
When every lens converges on the same unobservable variable, be cautious. Perfect agreement on something nobody can verify may reflect shared assumptions rather than independent analysis.
Regulatory Convergence as Systemic Risk
Not idiosyncratic regulatory risk hitting one company — but concurrent, multi-front regulatory action hitting entire categories at once.
We analyzed Visa and Mastercard independently, weeks apart. Both flagged the same regulatory convergence. That wasn't a hypothesis — it was an emergent finding from the process itself.
Three Things the Process Taught Us
Beyond the equity findings, running 2,234 lens reports through a structured debate process revealed things about the methodology itself.
Unanimous Consensus Is Suspicious
GitLab: 5 lenses reached perfect agreement with zero Voice of Reason interventions and zero minority positions. The system flagged its own unanimity — noting that this was "either genuine clarity or a consensus blindspot that would benefit from additional challenge pressure."
Beyond Meat: signals across all lenses converged on a unanimously negative read — zero favorable findings. Statistically unusual and analytically significant. When every lens agrees, either the evidence is overwhelming or the framing produced artificial convergence.
We still don't know which it is in either case. That honest uncertainty is the point.
The Value Is in Cross-Lens Interference
No single lens found the Beyond Meat distress spiral — the pattern where each financial rescue action simultaneously worsens the capital structure for the next crisis. Revenue decline leads to capital raises at distressed pricing, which causes massive dilution, which drives institutional disengagement. Only by running multiple lenses and reading across them did the self-reinforcing loop become visible.
No single lens found the Walmart GLP-1 risk. The Moat Mapper found DOMINANT competitive positioning. The Gravy Gauge found DURABLE revenue. It was the Black Swan Beacon — designed specifically for tail risks and consensus blindspots — that flagged GLP-1 weight-loss drugs as a potential headwind to grocery spending that every prior lens had missed.
The value isn't in any individual lens. It's in the interference pattern between them.
Prediction Calibration Takes Years, Not Weeks
Early results from our prediction ensemble are encouraging. The ASTS dilutive capital raise was predicted at 70% probability and resolved YES within 11 days — $1.7B in raises hit almost immediately (Brier score: 0.09). The CVNA OCF conversion ratio was predicted at 18% failure probability and correctly resolved NO (Brier score: 0.03).
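The Brier score cited above is just the squared error between a probability forecast and the binary outcome (1 if the event happened, 0 if not) — lower is better, and 0.25 is what an uninformative 50/50 forecast scores. A minimal sketch reproducing the two figures (the function name is ours, not the platform's):

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast (0-1) and a 0/1 outcome."""
    return (forecast - outcome) ** 2

# ASTS dilutive raise: forecast 70%, resolved YES
print(round(brier_score(0.70, 1), 2))  # 0.09
# CVNA OCF conversion failure: forecast 18%, resolved NO
print(round(brier_score(0.18, 0), 2))  # 0.03
```

Note the asymmetry of the scale: a confident wrong call (say, 70% on an event that doesn't happen) scores 0.49, far worse than the 0.25 of pure coin-flipping.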
But 16 resolved markets out of 255 is a 6% sample. We're publishing these results because transparency matters, not because we've proven calibration. Genuine calibration requires hundreds of resolutions across different market conditions and time horizons. We're building toward that — but claiming it now would be exactly the kind of premature confidence we warn against in our analyses.
What We Got Wrong
Honest accounting matters more than highlight reels.
The Reddit Traffic Thesis
Our initial analysis flagged a 55% decline in Reddit's Google traffic as a structural concern. Q4 2025 earnings challenged this: DAUq grew 19% to 121.4M, revenue beat by 9.2%. The traffic-decline signal may have been measuring a platform shift (users going direct) rather than a usage decline. We adjusted our assessment but the initial framing was too confident.
Early Analyses Were Too Shallow
Our earliest runs (Eli Lilly, Novo Nordisk, Disney) used 3-4 lenses when they deserved 6-8. The system improved as we learned which lens combinations produce the richest interference patterns. Later analyses like CVNA (8 lenses), LMND (8 lenses), and V (8 lenses) reflect that learning. We'd run several early analyses differently today.
The Intuit Insider Tension Remains Unresolved
We identified the Intuit narrative as INVERTED — the market is wrong in direction on AI disruption. But we couldn't resolve why insiders sold $375M with zero purchases if the business is truly undervalued. The honest answer is we don't know, and saying "the narrative is inverted" while insiders disagree with their wallets is a tension we haven't fully earned the right to resolve.
What's Next
Not a roadmap — a direction.
We're trending toward 7-8 lenses per company, up from the early 3-4. The interference patterns get richer with more lenses, but there are diminishing returns — we're still learning where the optimal depth sits.
The prediction ensemble has 239 active markets waiting for resolution. As those resolve over the coming months, we'll have a real calibration dataset — not the current 16 resolutions but potentially 50-100. That's when we'll be able to say something meaningful about prediction accuracy.
And we'll keep publishing the mistakes alongside the successes. The platform's value isn't in being right — it's in being transparent about how we think, where we're uncertain, and what we're learning.
See the analyses behind these patterns
All 32 companies analyzed with full lens reports, model disagreements, and signal classifications.
Related Reading
Auditing the Auditors
What we learned running 4 LLMs through a structured debate — the behavioral patterns that showed up consistently.
Why Personas Matter More Than Models
The 13 personas that orchestrate our analysis process — and why role design matters more than model selection.
How We Assess Price vs. Value
Probability-weighted stress-testing of the market's implicit assumptions — without a price target.