{"id":"ZwlDJzhnHT3SDGPDYYAD","title":"How LLMs Are Changing Fundamental Research","slug":"how-llm-are-changing-fundamental-research","content":"<p>When most people hear &ldquo;AI in investing,&rdquo; they picture high-frequency trading algorithms executing thousands of trades per second, or quantitative hedge funds mining satellite imagery for edge. That&rsquo;s the version of the story that gets covered in the press, and it&rsquo;s real. But it&rsquo;s also incomplete, and the incompleteness matters.</p>\n\n<p>There is a second, quieter transformation underway. It&rsquo;s happening not in the execution layer of markets &mdash; the speed-driven, latency-optimized world of electronic trading &mdash; but in the research layer. Specifically, in the way fundamental investors gather, process, and synthesize the qualitative information that drives long-term investment decisions.</p>\n\n<p>This shift is less dramatic than algorithmic trading. It doesn&rsquo;t involve co-located servers or nanosecond advantages. But for investors who care about business quality over multi-year horizons, it may ultimately prove more consequential.</p>\n\n<h2 style=\"font-size: 26px; margin-top: 48px; margin-bottom: 16px; color: #1a1a2e;\">Two Very Different Problems</h2>\n\n<p>To understand why LLMs matter for fundamental research, you first have to understand how fundamentally different the two uses of AI in investing really are.</p>\n\n<p>Quantitative trading uses AI to find statistical patterns in structured data &mdash; prices, volumes, order flow, factor exposures &mdash; and to act on those patterns faster than humans can. The competitive advantage comes from speed, scale, and mathematical sophistication. The data is numerical. The signals are statistical. The holding periods are often measured in seconds or days.</p>\n\n<p>Fundamental research is an entirely different problem. 
The goal is to understand a business deeply enough to assess whether its current price reflects its long-term value. The relevant data is mostly unstructured: earnings call transcripts, annual reports, customer reviews, employee sentiment, patent filings, industry publications, competitor analysis. The signals are qualitative. The holding periods are measured in years.</p>\n\n<p>For decades, these two worlds have existed in parallel with surprisingly little overlap. Quants built increasingly sophisticated models for the first problem. Fundamental analysts continued to read 10-Ks, build spreadsheet models, and talk to industry contacts the way they always had. The tools changed &mdash; Bloomberg terminals replaced paper filings &mdash; but the cognitive process remained largely the same.</p>\n\n<p>Large language models are the first technology that meaningfully changes the second problem. Not because they replace the analyst&rsquo;s judgment, but because they dramatically expand the scope of what a single analyst can investigate.</p>\n\n<h2 style=\"font-size: 26px; margin-top: 48px; margin-bottom: 16px; color: #1a1a2e;\">The Information Asymmetry Problem</h2>\n\n<p>Consider the information environment a fundamental analyst faces today.</p>\n\n<p>A typical S&amp;P 500 company produces a quarterly earnings call transcript of 8,000 to 12,000 words. Across the index, that&rsquo;s roughly 2,000 transcripts per quarter &mdash; somewhere around 20 million words every three months. A diligent analyst covering 30 companies reads perhaps 200 transcripts per quarter, including competitors and adjacent industries. That&rsquo;s 10% of the S&amp;P 500 alone, and it doesn&rsquo;t include the thousands of smaller public companies, international markets, or the vast universe of private company information.</p>\n\n<p>Add to this the explosion of alternative unstructured data. Glassdoor hosts millions of employee reviews. 
The USPTO publishes hundreds of thousands of patent applications annually. Customer review platforms generate continuous streams of sentiment data. SEC filings beyond earnings calls &mdash; proxy statements, 8-Ks, insider transaction filings &mdash; number in the tens of thousands per quarter.</p>\n\n<p>No human can process this. And until recently, no machine could either &mdash; at least not in a way that preserved the qualitative nuance that makes this information valuable.</p>\n\n<p>Traditional natural language processing could count word frequencies, score basic sentiment, and extract named entities. What it couldn&rsquo;t do was read an earnings transcript and notice that the CEO&rsquo;s answer about competitive dynamics subtly contradicted what the CFO said about pricing two questions later. It couldn&rsquo;t assess whether an employee review describing &ldquo;exciting but chaotic&rdquo; engineering culture was a positive or negative signal for long-term product development. It couldn&rsquo;t synthesize information across dozens of documents to form a coherent picture of management&rsquo;s strategic evolution over time.</p>\n\n<p>This is what has changed.</p>\n\n<h2 style=\"font-size: 26px; margin-top: 48px; margin-bottom: 16px; color: #1a1a2e;\">What LLMs Actually Enable</h2>\n\n<p>Modern large language models can process and reason about unstructured text in ways that are qualitatively different from prior NLP approaches. For fundamental research, several capabilities matter most.</p>\n\n<p><strong>Cross-document synthesis.</strong> An LLM can read five years of earnings transcripts from a single company and identify how management&rsquo;s narrative about R&amp;D spending has evolved &mdash; where the emphasis shifted, which projects stopped being mentioned, how the language around competitive positioning changed. 
This kind of longitudinal analysis across dozens of documents is extraordinarily time-consuming for a human analyst but plays to the strengths of models that can hold large contexts.</p>\n\n<p><strong>Inconsistency detection.</strong> When a company&rsquo;s CEO tells analysts that customer retention is &ldquo;extremely strong&rdquo; but employee reviews from the customer success team describe chronic understaffing and account churn, that&rsquo;s a signal. LLMs can systematically compare narratives across sources &mdash; management commentary, employee feedback, customer reviews, supplier references &mdash; and flag inconsistencies that might take a human analyst weeks to notice.</p>\n\n<p><strong>Scale without sacrificing depth.</strong> Perhaps most importantly, LLMs allow an analyst to conduct deep qualitative research across a much larger universe of companies. Instead of deeply understanding 30 companies, an analyst augmented by LLMs might maintain substantive qualitative coverage of 200 or 300 &mdash; not by reading less about each one, but by using technology to process the raw material and focusing human attention on interpretation and judgment.</p>\n\n<p>None of this means the technology is a substitute for human analysis. LLMs hallucinate &mdash; they generate plausible-sounding but factually incorrect statements, a tendency that is particularly dangerous in investment research where precision matters. They can miss context that a domain expert would catch immediately. They have no way to verify whether the patterns they identify are genuinely predictive or merely coincidental. 
And their training data introduces biases that may not be obvious to users.</p>\n\n<p>The right mental model is not &ldquo;AI analyst&rdquo; but &ldquo;AI research associate&rdquo; &mdash; a tool that dramatically expands the scope of investigation while requiring human oversight for every conclusion that matters.</p>\n\n<h2 style=\"font-size: 26px; margin-top: 48px; margin-bottom: 16px; color: #1a1a2e;\">What This Doesn&rsquo;t Change</h2>\n\n<p>It&rsquo;s worth being explicit about the limitations, because the hype cycle around AI in finance tends to obscure them.</p>\n\n<p>LLMs do not solve the fundamental problem of investment uncertainty. No amount of qualitative analysis &mdash; human or machine-augmented &mdash; can reliably predict which companies will outperform over the next decade. Markets are complex adaptive systems where the act of discovering an edge tends to erode it. Any technology that improves research quality will, if widely adopted, raise the baseline without necessarily creating persistent advantage for any individual user.</p>\n\n<p>LLMs also do not eliminate the need for primary research. Reading every earnings transcript ever published is valuable, but it&rsquo;s still secondary research &mdash; you&rsquo;re analyzing what management chose to say publicly. The most important insights often come from conversations that never appear in any document: a supplier mentioning that a company&rsquo;s payment terms have quietly lengthened, a former employee explaining why the VP of Engineering actually left, a customer describing the real reason they&rsquo;re evaluating competitors. This kind of primary qualitative research &mdash; what Philip Fisher called &ldquo;scuttlebutt&rdquo; &mdash; requires human relationships and judgment that technology cannot replicate.</p>\n\n<p>Finally, LLMs do not solve the behavioral challenges that make investing hard. 
Even if you identify a company with exceptional qualitative characteristics, you still need the conviction to hold through quarters of underperformance, the discipline to size positions appropriately, and the temperament to act when the market disagrees with your assessment. These are human challenges that no technology addresses.</p>\n\n<h2 style=\"font-size: 26px; margin-top: 48px; margin-bottom: 16px; color: #1a1a2e;\">The Emerging Landscape</h2>\n\n<p>Despite these limitations, the application of LLMs to fundamental research is accelerating. Several approaches are worth watching.</p>\n\n<p>Some firms are building internal tools that use LLMs to process earnings transcripts at scale &mdash; not just extracting sentiment scores (the old NLP approach) but generating structured assessments of management quality, competitive positioning, and strategic coherence across entire sectors. The quality of these assessments varies significantly and is heavily dependent on prompt engineering and domain-specific fine-tuning, but the best implementations are producing insights that experienced analysts find genuinely useful.</p>\n\n<p>Others are applying LLMs to alternative data sources that were previously too unstructured to analyze systematically. Employee review data, for instance, contains rich information about organizational health, but raw sentiment scores are notoriously noisy. LLMs can parse the actual content of reviews &mdash; distinguishing between complaints about compensation (which may reflect industry norms) and complaints about strategic direction (which may signal deeper problems) &mdash; in ways that traditional analysis could not.</p>\n\n<p>A third category involves using LLMs to maintain continuous qualitative monitoring of portfolio companies and potential investments. 
Rather than conducting deep-dive research at discrete intervals, these approaches process new information as it arrives &mdash; every filing, every transcript, every significant review &mdash; and flag changes that warrant human attention. This shifts the analyst&rsquo;s role from periodic investigator to continuous overseer.</p>\n\n<p>It&rsquo;s too early to know which of these approaches will prove most valuable, and there is a real risk that the technology is overhyped relative to its current capabilities. But the direction of travel seems clear: the tools for conducting qualitative research at scale are improving rapidly, and the investors who learn to use them effectively will have a meaningful advantage in information processing &mdash; even if that advantage is partially offset as adoption spreads.</p>\n\n<h2 style=\"font-size: 26px; margin-top: 48px; margin-bottom: 16px; color: #1a1a2e;\">The Human Layer Remains Essential</h2>\n\n<p>The most important thing to understand about LLMs in fundamental research is what they don&rsquo;t change: the need for judgment.</p>\n\n<p>An LLM can tell you that a company&rsquo;s employee sentiment has declined 15% over two quarters, with the sharpest drops in engineering and product management. It can flag that management&rsquo;s language about innovation has shifted from specific product mentions to vague aspirational statements. It can note that customer reviews increasingly describe the product as &ldquo;reliable but dated.&rdquo;</p>\n\n<p>What it cannot tell you is whether this pattern means the company is entering a period of strategic drift that will erode its competitive position &mdash; or whether it&rsquo;s the natural trough of a product cycle before a major platform refresh that management hasn&rsquo;t yet disclosed. 
That interpretation requires industry knowledge, competitive context, and pattern recognition that comes from years of experience studying businesses.</p>\n\n<p>The technology is a force multiplier for good fundamental analysts. It doesn&rsquo;t make bad analysts good. An analyst who doesn&rsquo;t understand the difference between a cyclical trough and secular decline won&rsquo;t be saved by having more data to misinterpret.</p>\n\n<p>This is ultimately why the application of AI to fundamental research is more interesting than its application to quantitative trading. Quantitative trading is a well-defined optimization problem where machines have clear advantages over humans. Fundamental research is a judgment problem where the value of technology lies in expanding the scope of human analysis, not in replacing it.</p>\n\n<p>The investors who will benefit most from this shift are those who already have strong analytical frameworks and deep domain expertise &mdash; and who now have tools that let them apply those capabilities across a much larger universe of companies. The technology doesn&rsquo;t change what good investing looks like. It changes how much of it you can do.</p>\n\n<hr style=\"border: none; border-top: 1px solid #ddd; margin: 40px 0 24px;\" />\n\n<p style=\"font-size: 14px; color: #888; font-style: italic; font-family: Arial, Helvetica, sans-serif;\">Chris Stark is the portfolio manager of Stark Fund, a concentrated equity strategy.</p>\n\n<p style=\"font-size: 13px; color: #999; font-family: Arial, Helvetica, sans-serif; line-height: 1.6;\">This content is for informational and educational purposes only and does not constitute investment advice, an offer to sell, or a solicitation of an offer to buy any security. All investments involve risk, including the possible loss of principal. Past performance is not indicative of future results.</p>","excerpt":"AI in investing usually means algorithmic trading. 
The more interesting shift is happening on the other side of the industry — in the messy, judgment-heavy work of understanding businesses.","author":"Christopher Stark","tags":[],"readingTime":10,"scheduledPublishAt":null,"createdAt":{"_seconds":1771443541,"_nanoseconds":964000000},"publishedAt":{"_seconds":1771460185,"_nanoseconds":363000000},"status":"published","metaDescription":"Discover how Large Language Models are quietly revolutionizing fundamental research by expanding the scope of what a single analyst can investigate.","featuredImage":null,"updatedAt":{"_seconds":1771486654,"_nanoseconds":726000000}}