Matt Wolodarsky

The Investor's Guide to Profiting from the AI Boom Part I

Updated: Oct 6

Explore the AI Landscape and Invest with Confidence

The four most expensive words in the English language are ‘This time is different’ - Sir John Templeton

In the late 1990s, the dot-com boom transformed both business and physical landscapes. Early fortunes were made by entrepreneurs and investors who eagerly rushed into this new frontier, armed with their virtual shovels. Underpinning the new fortunes being accumulated was the build out of the world's largest and most open network - the Internet.


The era saw a frenetic laying of fiber-optic cables by telecom companies that crisscrossed continents, penetrating both urban centers and remote areas. As the world rushed to connect, an unseen surplus of fiber-optic cable began to accumulate beneath the surface. Cisco emerged as the poster-child beneficiary of this network hardware spending, with revenue growth peaking at 55% YoY in 1999. After delivering an epic 1,000-fold return for its shareholders throughout the 1990s, Cisco became one of the most valuable companies in the world, with a market cap of $555 billion by the beginning of 2000.


The rapid expansion of broadband infrastructure followed an almost mythic trajectory: unbridled investment, massive build-out, and eventual overcapacity. The fallout from this overinvestment was harsh, leading to a wave of bankruptcies and consolidation in the telecommunications industry. Telecom operators drastically cut their capital expenditures after the Internet bubble burst. Cisco's layoffs and large inventory write-downs in 2001 marked the beginning of a prolonged decline, with a stock price that has never returned to its all-time high.


This phenomenon, termed the "dark broadband theory" by industry analyst Craig Moffett, highlighted a crucial misstep: building far beyond immediate needs or utilization, which precipitated a significant market correction.


Amid the broadband infrastructure spending bonanza, investing legend Bill Miller was finding his stride. As the sole portfolio manager of the Legg Mason Capital Management Value Trust since the early 1990s, Miller began investing in the technology sector in the latter half of the decade. He built an impressive record with early bets on Amazon, Booking Holdings, eBay, and Google. Remarkably, he is the only portfolio manager to have beaten the market for 15 consecutive years, from 1991 to 2005, covering the dot-com boom and bust cycle.


Today, as artificial intelligence ushers in a new technology-driven boom cycle, history shows signs of repeating itself. This time the culprit is GPUs: graphics processing units, indispensable for training and running more capable AI models. Analyst Ben Thompson's "dark GPU theory" warns that today's massive investments in GPUs might soon outpace actual needs, echoing the broadband glut and raising the prospect of a market correction similar to the dot-com bust.


Here lies today's investor conundrum: How can we capitalize on the AI boom without falling into the cyclical traps of previous eras? Just as Bill Miller navigated the dot-com era with patient foresight, today's investors must enter the AI market with eyes wide open.


Miller's success provides key lessons: his contrarian approach, long-term investment horizon, and ability to adapt and diversify. Our challenge now is to discern real value amidst the hype, identify promising AI sub-sectors, and leverage the AI-driven market boom without succumbing to speculative bubbles.


Even if you are not interested in investing directly in the AI wave, investors of any stripe cannot ignore the implications of this once-in-a-generation technology shift. It is not an overstatement to say that, over time, whole industries and ways of doing business will be transformed. Any company you hold in your portfolio today that does not adapt in time will be washed away.


In Part I of this series, explore the AI landscape and gain the context you need to thrive, while learning strategies to avoid the pitfalls that ensnared many during the dot-com bubble. In Part II, discover my investment approach, criteria for identifying AI winners, and my top long-term bets for the AI era.


The AI Opportunity: A Primer for Investors

Every couple of decades, the world and stock markets are shaken by transformative, general purpose technologies, with the technology du jour being Generative Artificial Intelligence (GenAI).


From the railroad boom of the 19th century to the electrification boom of the early 20th century, and more recently the PC (1980s) and dot-com (late 1990s) booms, these technology waves typically begin by delivering incremental productivity gains within existing workflows. For instance, electrification initially focused on replacing candles with electric lights in factories, boosting productivity and improving safety.


Phase two occurs at an inflection point, when sufficient complementary technologies have been developed to allow the full power of the general purpose technology to be applied in a more accessible and significantly less costly way for the masses. While the commercialization of the Internet in the latter half of the '90s led to some incremental improvements, it took almost a decade for the iPhone to be invented, enabling the full potential of the Internet to be realized. The iPhone gave birth to Uber and Airbnb, and more than a decade later, most people cannot imagine living without the convenience of modern mobile apps. In phase two, entirely new workflows get developed, unlocking "hockey stick" like productivity gains.


Since the general public's first exposure to the power of GenAI, we have come a long way. On November 30, 2022, OpenAI quietly released a beta version of its ChatGPT chatbot. We all know what happened next: ChatGPT went viral on social media and reached 100 million users within two months, surpassing TikTok as the fastest app ever to scale that quickly.


We are still in the early stages of GenAI's adoption curve. Existing and new technology companies are betting big on GenAI. They are integrating it into their existing solutions and/or building the complementary technologies that will help it scale to billions of people and embed it into everything we do, consume, and interact with.


GenAI is being integrated into existing workflows across various industries, enhancing productivity and freeing knowledge workers from mundane work. For example, AI models like GPT-4 are being used today in customer service chatbots, content creation, data analysis, and even software coding. In all cases, demonstrable value is being realized in the form of productivity gains or cost reductions.


As complementary technologies such as more powerful GPUs, specialized AI chips, advanced machine learning frameworks, and new energy and data sources continue to develop, training even more capable Large Language Models (LLMs) becomes more feasible and more certain. More capable models will lead to more economic value unlocked. McKinsey estimates the economic benefits of generative AI from applying it across knowledge worker activities alone will be $6.1 trillion to $7.9 trillion.


We are approaching a point where GenAI apps and services will become ubiquitous, similar to what happened with the launch of the iPhone and how it unlocked the full potential of the internet. More personalized and user-friendly interfaces, API integrations, lower computational costs, neural processing units (NPUs) powering AI tasks on phones and PCs locally, and no-code/low-code AI platforms are just a few examples of the technologies on the way from companies such as Apple, Microsoft, Qualcomm, Google and Amazon that will help pave the way for GenAI to reach billions.


How far are we away from phase two "hockey stick" productivity gains? And, where can retail investors find AI opportunity in the meantime?

While it took ten years for sufficient complementary technologies (i.e., the iPhone) to arrive for the Internet's full potential to be realized, it may not take as long for AI to reach the lucrative phase two of general purpose technology adoption. Sure, history rhymes but it is never a carbon copy.


The more substantive reason for my optimism that AI will make more progress, and be more impactful, in a shorter period of time is the law of accelerating returns, coined by Ray Kurzweil and popularized by blogger Tim Urban. I wrote about this theory in my guide to investing in technology stocks. The theory goes like this: AI advancements are taking on a new trajectory of exponential growth thanks to increases in compute power and how AI systems are programmed. First, we have come so far recently that we all now have access to AI systems that routinely exceed human performance on standard benchmarks (Source: Artificial Intelligence Index Report 2024, Stanford University). Second, an AI system can be programmed with the goal of improving its own intelligence. After several iterative improvements, AI is expected to become so smart that it will have an easier time learning and will therefore make bigger leaps in intelligence.


With the current capabilities of GPT-4 Turbo, we are likely closer to this frontier than most realize. Today, GPT-4 performs amazingly well on a variety of tests, including the Uniform Bar Exam, where it scores higher than 90 percent of humans, and the Biology Olympiad, where it beats 99 percent of humans. With the rumored, soon-to-be-released GPT-5, AI will be able to make even bigger leaps in intelligence than ever before. As these leaps grow larger and happen more rapidly, artificial general intelligence continues to get smarter and eventually reaches superintelligent levels. Tim Urban describes this as the ultimate example of the Law of Accelerating Returns.

The truth is that no one knows how long it will take to reach AI ubiquity or superintelligence. But we don't have to let this uncertainty keep us on the sidelines. If you are willing to take on some moderate risk, participating now may help you profit from what could end up being the biggest productivity-boosting general purpose technology ever!


The key to navigating this evolving landscape lies in understanding both the current trajectory of AI and the important, unanswered questions that remain. Investors who stay flexible, continuously reassess their views on how the AI ecosystem will develop, and are ready to adapt their strategies accordingly will be best positioned to seize opportunities within the AI sector and beyond. Embracing this uncertainty and leveraging known trends in AI can lead to successful investment outcomes, even in an unpredictable environment. To be better informed, let’s examine what we know about the AI ecosystem today and what remains uncertain.


Essential AI truths


1. "God-like" LLMs are becoming commoditized

As general-purpose large language models (LLMs) such as ChatGPT, Llama, and Gemini become more indistinguishable from one another, the competitive advantage for companies purely developing the largest and most capable "God-like" LLMs diminishes. This doesn't necessarily mean there will be no value in the "God-like" models and the companies that build them. AI will become one of the most important commodities for building and operating a successful business or government. It's more likely, as we've seen in the oil market, that this commoditization will give rise to one or two extremely valuable companies selling these "commodities." And the one or two eventual winners haven't been crowned yet. Yes, it appears OpenAI has a lead, is well funded, and has a track record of successful launches. But remember how Facebook emerged after MySpace and Friendster had first-mover advantage. Similarly, the most well-capitalized AI model builders today might not necessarily become the ultimate winners, as open-source models such as Llama close the gap with the leading players.


For investors, opportunities extend beyond model builders. Winners in the AI ecosystem will include cloud service providers, AI platform and tooling companies, and enterprise and consumer applications. As AI models become commoditized, these sectors will benefit greatly by providing essential infrastructure, platforms, and applications that help organizations get the most value from AI and spur widespread adoption.


2. Increasing general-purpose model capability requires a massive amount of power, compute, and data

Due to scaling laws, general-purpose LLMs like OpenAI's ChatGPT will continue to become more capable, with step function improvements in their abilities. As more power, compute, and data are applied to their training, they will consequently become more valuable.

The computational power required to train LLMs has been growing exponentially. The total amount of compute required to train a single state-of-the-art (SOTA) AI model has increased by 4.2x per year since 2010. Training GPT-3 was reported to have used approximately 10,000 graphics processing units (GPUs). Training GPT-5 is expected to require approximately 250,000 GPUs. Stanford University's AI Index estimates indicate that the training costs for cutting-edge AI models have soared to new heights. For instance, it is estimated that OpenAI's GPT-4 required approximately $78 million in computing resources for training, whereas Google's Gemini Ultra incurred a training cost of about $191 million.
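To make the compounding concrete, here is a rough back-of-the-envelope sketch in Python. The 4.2x annual compute growth and the roughly $78 million GPT-4 training cost are the figures cited above; the assumption that cost scales one-to-one with compute (ignoring hardware and efficiency gains) and the three-year horizon are illustrative simplifications, not forecasts.

```python
# Back-of-the-envelope sketch of how fast training requirements compound, using the
# figures cited above (4.2x per year compute growth, ~$78M to train GPT-4). The
# one-to-one cost/compute scaling and the horizon are illustrative assumptions.

COMPUTE_GROWTH_PER_YEAR = 4.2   # SOTA training compute growth cited above
GPT4_TRAIN_COST_USD = 78e6      # Stanford AI Index estimate cited above


def projected_training_cost(years_out: int) -> float:
    """Naively assume cost scales 1:1 with compute (ignores efficiency gains)."""
    return GPT4_TRAIN_COST_USD * (COMPUTE_GROWTH_PER_YEAR ** years_out)


for years in range(1, 4):
    print(f"{years} year(s) out: ~${projected_training_cost(years) / 1e6:,.0f}M")
# 1 year(s) out: ~$328M
# 2 year(s) out: ~$1,376M
# 3 year(s) out: ~$5,779M
```

Even if efficiency gains blunt the curve, the direction of travel explains why the hyperscalers' capital expenditures keep climbing.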


This rapid escalation in compute needs has driven demand for more power and sustainable energy practices. More GPUs means more energy needed. According to the investment firm Bernstein, worldwide demand for electricity from AI data centers could grow by 5% a year through 2030.


Large datasets are essential for training these models effectively. The more diverse and extensive the data, the better the model can generalize across different tasks and domains.


There appears to be no way around these scaling laws. Whether it's Sam Altman's efforts to secure $5 to $7 trillion in investment funding to expand global AI chip supply, rumors that Microsoft is turning to nuclear reactors to power its data centers, or the industry's experimentation with synthetic data, AI leaders and tech titans are trying to move mountains to meet the massive requirements of compute, power, and data in a world of scarcity.


As frontier model developers look to build more capable models, the demand for compute, energy, and data will soar. Semiconductor investors may still gain from this demand. The need for vast and diverse datasets opens avenues for investment in data-centric companies. AI model builders with vast, high-quality datasets such as Meta and Google have a competitive edge in training more effective AI models. Tech investors may also be lured into the new energy plays that will be necessary to power all the required compute.


By focusing on these areas, investors can capitalize on the essential infrastructure and resources needed to support the future of AI advancements.


3. The declining costs to run models are making AI more affordable

Once general-purpose model builders overcome the initial costs of training a new, more capable model, the next challenge is managing the high costs of running the model.


Once a GenAI model is trained, it moves into the inference phase, where it uses its training to generate unique responses to new inputs. This stage is more about quick and efficient application of what the model has learned, rather than the intensive learning process involved in training. Think of it like the difference between learning how to solve math problems in a classroom setting (training) versus quickly working through a set of problems on a test that you've already been taught how to solve (inference). Each time anyone uses ChatGPT, OpenAI incurs an inference cost.


While the costs to train a large, general purpose model are increasing, the opposite is occurring in the inference phase. OpenAI's latest model, GPT-4o, released in May 2024, is 83% cheaper to run than its predecessor GPT-4. Not only is it cheaper, GPT-4o is also 2X faster.
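To see why an 83% drop in inference cost matters at scale, here is a minimal illustrative sketch. The 83% reduction is the GPT-4o figure cited above; the per-query cost and the daily query volume are hypothetical placeholders, not actual OpenAI pricing.

```python
# Illustrative unit economics of falling inference costs. The 83% reduction is the
# GPT-4o vs. GPT-4 figure cited above; the per-query cost and query volume are
# hypothetical placeholders, not actual OpenAI pricing.

OLD_COST_PER_QUERY = 0.01      # hypothetical cost per query on the older model (USD)
COST_REDUCTION = 0.83          # GPT-4o cited as 83% cheaper to run
QUERIES_PER_DAY = 10_000_000   # hypothetical daily query volume

new_cost_per_query = OLD_COST_PER_QUERY * (1 - COST_REDUCTION)
daily_savings = (OLD_COST_PER_QUERY - new_cost_per_query) * QUERIES_PER_DAY

print(f"New cost per query: ${new_cost_per_query:.4f}")        # $0.0017
print(f"Daily savings at this volume: ${daily_savings:,.0f}")  # $83,000
```

The exact numbers matter less than the shape of the math: at high volumes, even modest per-query savings compound into budgets large enough to change which AI features are economically viable.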


Investors can anticipate continued decreases in the costs of running LLMs due to several technological advancements and new, more efficient techniques. New hardware, such as Nvidia's recently announced Blackwell B200 tensor core chip, is expected to reduce AI inference operating costs and energy consumption by up to 25 times compared to older technology. For local tasks, Apple plans to use on-device AI, which will lower costs by minimizing data transfer and cloud processing fees. Additionally, smaller, more cost-effective models are being adopted for specific tasks that they can handle as effectively as larger models.


The declining costs of running GenAI will democratize access across all sectors of the economy, driving broader adoption and creating new market opportunities for AI companies to profit from. Investors should look for the AI hardware companies and cloud service providers that will help drive down inference cost. As the costs to run AI continue to plummet, a dizzying array of AI features will be integrated into existing software and new AI based products will come to market. Investors will need a sharp, discerning eye to sift through the noise.


4. The long tail of smaller, more specialized models

Beyond the battle for foundational LLMs among the AI giants, a long tail of smaller and more specialized models has emerged, akin to the evolution seen in the computer industry. The first computers were enormous, expensive mainframes, accessible only to large organizations. The prevailing belief was that only a handful of these machines would ever be needed. However, the introduction of personal computers in the 1980s drastically changed the industry. PCs became more affordable and widespread, eventually leading to the development of even smaller and more versatile devices like smartphones. Today, computers range from powerful supercomputers to tiny embedded systems, each tailored to specific needs and applications.


Similarly, AI models are evolving to include not only large, general-purpose systems but also a diverse ecosystem of specialized models. While foundational models developed by AI giants serve broad applications, there is immense value in creating smaller, specialized models optimized for specific tasks such as medical imaging analysis. These specialized models address various requirements such as different modalities, performance, latency, cost, and security. For example, Apple's recent announcement of using a small model for on-device AI processing illustrates this trend, offering enhanced privacy, lower power consumption and reduced latency.


As with the PC revolution, the AI industry is poised to feature a wide array of models, each suited to different scales and purposes. This diversity of models will ensure that AI can be effectively integrated across various sectors, geographies and form factors; meeting the unique demands of each application and user base.


Investors can benefit from the long tail of smaller and more specialized AI models by investing in companies that serve as AI model purveyors. Companies such as Palantir, Microsoft, and Amazon, which remain model agnostic and focus on providing their customers with the right tool for the job, will likely see sustained demand. They offer the necessary infrastructure, tools, and platforms for developing, deploying, and managing a wide variety of AI models tailored to specific industries and use cases.


5. Developers will be more productive and coding will be more accessible, leading to more software in the world

No other profession has been more transformed by the GenAI wave than that of the software developer. AI-powered code completion tools like GitHub Copilot are being rapidly adopted by developers.


These tools provide intelligent code suggestions and automate repetitive tasks to help developers write code faster (developers who use GitHub Copilot code 55% faster than those who do not), reduce errors, and streamline their workflow, allowing them to focus more on creative problem-solving and developing more software.


These same coding capabilities of LLMs that are making developers more productive are also making software development more accessible to anyone with an idea. Whether by describing to a future version of ChatGPT what sort of app you want to build or by using an AI no-code platform like ServiceNow or Microsoft's Copilot Studio, software development skills will become accessible to most of the global population.


The costs to develop software are dropping dramatically. This means all of the unmet software needs of information and frontline workers, medical professionals, educators, students and consumers around the world can and will be met at a fraction of the cost.


As hard as it may seem to believe, get ready for a lot more software in the world!


For investors, the explosion of software in production means more cloud infrastructure will be needed to host and manage a 10X increase in apps. New software produces more data, which will need to be analyzed and stored somewhere. And the new software will need to be secured, requiring more security software.


6. AI makes workers more productive and able to produce higher quality work

Recent studies over the past year have provided data demonstrating the productivity gains achieved by labor through the application of generative AI:


  • In a meta-review conducted by Microsoft, which analyzed the performance of workers using Microsoft Copilot or GitHub’s Copilot—both LLM-based productivity-enhancing tools—with those who did not use AI, it was discovered that Copilot users completed tasks in 26% to 73% less time than their counterparts without AI access.

  • According to a Harvard Business School study, consultants who had access to GPT-4 experienced a 12.2% increase in productivity, a 25.1% improvement in speed, and a 40.0% enhancement in quality compared to a control group without AI access.

  • The National Bureau of Economic Research found that call-center agents utilizing AI managed 14.2% more calls per hour compared to those who did not use AI.

  • A study on the impact of AI in legal analysis revealed that teams with access to GPT-4 significantly enhanced their efficiency and achieved notable quality improvements in various legal tasks, particularly in contract drafting.

  • AI access narrows the performance gap between low- and high-skilled workers. The Harvard Business School study found that both groups of consultants saw performance boosts with AI adoption, with lower-skilled consultants experiencing notably larger gains.

Advancements in complementary technologies, fine-tuning, retrieval-augmented generation (RAG), and new UI patterns will significantly enhance LLM performance and ease of adoption in the coming years. These improvements should lead to more productivity and adoption, and likely macro level productivity gains.


For investors, we are past the stage of answering the question: will generative AI deliver economic value?

The questions now are about how much impact generative AI will have on the top and/or bottom lines of corporate America and the rest of the world. Will it be just a few basis points of profit gains, or can we expect substantial revenue and bottom-line growth? We are seeing AI economic impact projections, including one from McKinsey, which we will have to remain skeptical about until we see the results show up in earnings calls. In short, I'm optimistic but will look for proof points along the way to temper my enthusiasm.

7. Organizations and their employees are adopting GenAI, and monetization is growing

With all of the excitement about the possibilities and the productivity gains being realized by information workers, it should not be a surprise that GenAI is being adopted in the workplace.


A recent survey conducted by Microsoft and LinkedIn found that 75% of global knowledge workers are using GenAI at work. Despite the high adoption rates, there is a gap between usage and monetization. In a survey by corporate expense-management and tracking company Ramp, about a third of companies pay for at least one AI tool, up from 21% a year ago. Many companies are still in the experimentation stage, and their employees are not waiting. Information workers are taking the initiative to incorporate GenAI into their workflows on their own.


Most companies are preparing for the future: By early 2024, 87% of companies surveyed by Bain reported that they were already developing, piloting, or deploying generative AI in some form. These initial implementations are primarily focused on software code development, customer service, marketing and sales, and product differentiation.

Companies are growing their spend on GenAI year over year. However, a large part of that investment remains in development and/or pilots. Investors should be cautiously optimistic, but to be fully convinced that monetization will live up to the hype, they will need to see more of the GenAI spend allocated to production use cases that deliver measurable business results.


Unknowns and risks

1. Where's the ROI on the massive AI infra investments being made?

With billions spent on Nvidia and related infrastructure to train and run frontier models, skeptics are unsurprisingly questioning when companies will see a return on their capital expenditures.


Venture capital firm Sequoia recently estimated that the AI industry spent $50 billion on the Nvidia chips used to train advanced AI models in 2023, but generated only $3 billion in revenue. In a blunt post, Sequoia partner David Cahn asked: "Where is all the revenue?" In the post he describes a $600B gap between the revenue expectations implied by the AI infrastructure build-out and actual revenue growth in the AI ecosystem.


For a healthy AI boom, we need this looming gap to close and we need AI winners beyond today's sole beneficiary: Nvidia. We are starting to see some AI revenue materialize beyond Nvidia, as it has been credibly rumored that OpenAI hit $3.4B ARR in 2024. In its Q2 earnings report, enterprise software company ServiceNow reported accelerating AI revenue growth, including 11 deals worth $1M+ each for its AI product.

This isn't enough. We need to see a broadening of AI spend beyond the chips used to train the models. This includes the required spend on AI infrastructure to run the models at scale, the tooling and platforms to build new AI solutions, the incremental software spend on AI capabilities being added to existing software, and new AI-based consumer products that compete in the attention economy for advertising and subscription revenue. I remain confident this broadening of AI revenue will materialize; I'm just not sure of the timeline or who the winners will be. All I have is patience and well-researched guesses.


2. When will we reach AGI?

Much hoopla, and investor fervor, surrounds the idea that humanity is on the verge of achieving artificial general intelligence (AGI). While we've seen significant advancements in AI over the last couple of years alone, we have not reached the holy grail of AGI: a theoretical AI system that possesses cognitive abilities comparable to those of a human. Today's GenAI LLMs are still just predictive systems. They are far away from communicating and acting with the same nuance and sensitivity as humans. Some experts believe that reaching this level of intelligence is still several decades, or even centuries, away. The standard is something called the Turing test (named after its creator, 20th-century computer scientist Alan Turing), which is passed when an AI's abilities are indistinguishable from those of a human.


While this distinction may sound academic, its consequences are profound. Today's LLMs are like having an intern follow you around who responds to your texts, answers questions based on information you've given it, and conducts research on your behalf. They are powerful, yet they cannot consistently handle facts, engage in complex reasoning, or explain their conclusions. AGI is a more maximalist type of AI. It will be like having an always-available genius savant with an unlimited knowledge base and the most advanced abilities to think critically, learn anything, and solve novel problems independently.


It's the difference between a copilot that amplifies human abilities (GenAI) and a super intelligent agent (AGI) that replaces humans in even some of the most intellectually challenging jobs. GenAI will make information workers way more productive than those that don't use it. It will help humans schedule their time and respond to emails, write reports or prepare presentations, code, or even answer complex questions by synthesizing information from multiple sources. AGI will discover new drugs to treat diseases, diagnose complex medical conditions, provide legal representation, give economic advice to governments, do complex accounting audits, and produce unique artwork.


AGI could revolutionize industries in ways that are currently unimaginable, making today's advancements seem incremental by comparison.

Investors must temper their enthusiasm about today's GenAI opportunities with a realistic understanding of where we are and the uncertainty on when, and to some extent if, AGI will be achieved in our lifetime. While GenAI is enhancing productivity and efficiency across various sectors today, it remains a powerful tool rather than a true replacement for human intellect. Don't expect GenAI to deliver massive job displacement in every sector and job type in the economy.


Investors must not allow the hype to get the better of them. Cautious optimism is key. Investing in GenAI will likely yield impressive returns, but like most new technologies, GenAI will hit a peak of inflated expectations (if it hasn't already). What happens next can be painful. If you are familiar with the Gartner hype cycle, you know we will have to get past the trough of disillusionment before the full potential of AI is realized.



And AGI won't arrive like a light switch being flipped. We won't go from not having AGI one day to having it the next. The path to AGI will be slow and incremental. Pay attention to major capability achievements in the models as potential inflection points for investing in the sectors that will be most impacted.


One recent notable advancement was the improvements GPT-4o made in performance with audio inputs. GPT-4o is able to respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. This is an underrated step toward much more natural human-computer interactions. As a result, GPT-4o can observe tone, multiple speakers, or background noises, and can output laughter, singing or emotional expression. With humans no longer able to tell the difference between a talking human and a talking computer, areas such as customer service, healthcare, entertainment and media, gaming (e.g., non-player characters) and personal development and wellness (e.g., virtual therapists and life coaches) could all see massive value creation in the coming years.


With each major model breakthrough, investors should ask themselves which sectors and companies stand to benefit the most, and whether all the elements needed to apply and monetize the breakthrough are in place.


3. What will be the impact of AI on the consumer market?

The future impact of Generative AI on the digital consumer is uncertain. The integration of AI into consumer hardware products like smart home devices, wearables and even home objects (like picture frames, TVs, toys and mirrors) is just beginning, and it’s unclear how consumers will respond. This creates potential shifts among consumer device market leaders such as Apple, Amazon, Google, Meta and Samsung. For example, Meta seems to be onto something with the integration of its LLM Llama into its Ray-Ban smart glasses. Llama will power the virtual assistant in the glasses, using voice and camera input to complete complex tasks. But, we are still in the experimental phase and questions remain about what direction will ultimately scale to billions of users.


Beyond ChatGPT, which seems stuck at 100M weekly active users, consumer use of AI products is limited. This is raising legitimate questions about whether AI in the consumer space will be a thing. For AI based consumer experiences to thrive, they must deliver significant benefits to take on the likes of Facebook, X, Netflix and TikTok in the attention economy. Long term, AI consumer apps will need to hold people's attention at the level of these incumbents to take their share of subscription and advertising revenue.

But here we are. ChatGPT has arrived on the scene like an earthquake, ready to impact the consumer technology market at levels possibly not seen since the arrival of the iPhone. It's a common pattern: new consumer technology giants get built in the aftermath of a major enabling technology shift. What will be the AI-era equivalents of mobile-first platforms like Instagram, Facebook, Snap, and Uber?


We are starting to see early examples of LLM-native applications that are attracting and retaining a dedicated audience, and in some cases displacing legacy incumbents: Perplexity for search, Character.ai for companionship, Midjourney for image generation, Suno and Udio for music generation, and Luma, Viggle, and Pika for video generation.


However, if you think it's early in the enterprise AI market, we are even earlier in the consumer AI market. Few consumer AI app categories with real depth and value have emerged, leaving a bunch of wannabe consumer apps that are thin wrappers built around ChatGPT.


With the rise of multimodal capabilities (audio, video, etc.) and new AI-driven interactive experiences such as synthetic media, autonomous agents and intelligent non-player characters in video games, there is a lot of innovation in the consumer tech market ahead of us. AI will not only transform our favorite activities like socializing, video games, entertainment, shopping, and travel, but also enable us to discover and create new ways for people to connect, play, purchase, and explore the world.


Investors will need to be patient and skeptical as multiple consumer AI product companies likely IPO over the next few years and legacy consumer companies adapt with AI to enhance the lives of consumers. For consumers to fully embrace AI there is plenty still to figure out, small issues :) such as privacy, ethical considerations, user interfaces, and the full utility of general-purpose AI assistants.


4. Are we in a bubble, and if yes, when will it burst?

Let me start by saying this is not a question that can be answered definitively. Let's lean on some sage advice from investing legend Howard Marks to point us in a productive direction on this unanswerable question:


"We never know where we're going, but we sure as hell ought to know where we are. Where is the market in its cycle? Is it depressed or elevated? When it's depressed, the odds are in the buyer's favor, and when it's elevated, the odds are against him. And it's really as simple as that. You should distinguish between markets that are high in their cycle and markets that are low. You should vary your behavior on that basis. You should take more risk when the market is low in its cycle and less risk when the market is high in its cycle."
- Howard Marks, Oaktree Capital

With the tech-heavy Nasdaq and benchmark S&P 500 hitting record highs as of July 5, 2024, we are clearly not in a depressed market. When asked to describe today's market as either depressed or elevated, the choice is clear. The time to take more risk was two years ago. In the elevated market we are in today, logic dictates we should follow Howard's general advice and take less risk.


However, I believe there is more nuance to evaluating the current market cycle and in employing risk management during this transformative era. A brief history lesson of the last platform shift driven market frenzy will help set a good context for us to explore.


In 2000, the dot-com bubble burst, ending a five-year stretch of surging valuations and heightened investor excitement over the promise of the Internet. This period of exuberance channeled a significant amount of venture capital into internet startups, while the Nasdaq index climbed from below 1,000 to over 5,000 between 1995 and 2000. By 2002, however, the index had dramatically fallen to 1,139, a nearly 80% decrease.

Between 1995 and 2000, the Nasdaq Composite skyrocketed 5X. It reached a price-to-earnings ratio of 200, dwarfing the peak price-to-earnings ratio of 80 for the Japanese Nikkei 225 during the Japanese asset price bubble in 1991.


We are nowhere near those levels today. By comparison, Wall Street firm Jefferies shared valuation measures earlier in July for a basket of 27 large-cap AI stocks. This basket is made up of chip design companies (e.g., Nvidia), cloud service providers (e.g., Microsoft), chip foundries (e.g., TSMC), and capital equipment makers for chip manufacturers (e.g., ASML). The firm said this basket of large-cap AI stocks is trading at an average of 28 times consensus expectations for their 2025 earnings.
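A quick back-of-the-envelope comparison of the two multiples cited here helps put the gap in perspective. Keep in mind that a trailing P/E (the dot-com-era Nasdaq) and a forward P/E (Jefferies' AI basket) aren't strictly comparable, so treat this only as a rough sense of scale.

```python
# Rough comparison of the valuation multiples cited above: the dot-com-era Nasdaq at
# a P/E of ~200 vs. Jefferies' AI basket at ~28x expected 2025 earnings. Trailing and
# forward multiples aren't strictly comparable; this only gives a sense of scale.

dotcom_peak_pe = 200   # Nasdaq Composite P/E at the 2000 peak (cited above)
ai_basket_pe = 28      # Jefferies basket, 2025 consensus earnings (cited above)

print(f"Earnings yield at the dot-com peak: {1 / dotcom_peak_pe:.1%}")       # 0.5%
print(f"Earnings yield of the AI basket:    {1 / ai_basket_pe:.1%}")         # 3.6%
print(f"Dot-com multiple was ~{dotcom_peak_pe / ai_basket_pe:.0f}x richer")  # ~7x
```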

“The bubble will likely get bigger before deflating.” - Jefferies analyst Edison Lee, July 7, 2024

While I do not plan on risking my hard-earned capital on the prospects of the current AI frenzy reaching the epic proportions of the dot-com bubble, or on the words of some random analyst, I am of the general belief that this market has some room to run. Not necessarily to the levels reached at the height of the dot-com boom, but some room. I do not expect everyone to share this belief. In fact, you shouldn't take the word of some random investing blogger. Do your own research. Consider your risk tolerance and your willingness to hold tight through volatility. Just like the dot-com era, it won't be an "up and to the right" journey.


To navigate this uncertainty, volatility, and likely bubble territory we need some sound principles to guide our behavior. Otherwise, we are lost and at risk of letting our emotions get the better of us.


Principles and investment strategies for the frenzied AI (GenAI) boom

1. Embrace impermanence

Everything that has a beginning has an ending. Make your peace with that and all will be well.

- Buddha


There may be no greater truth in financial markets than the cyclicality of booms and busts. The Asian "economic miracle" led to the Asian financial crisis of 1997; the dot-com boom of the late 1990s resulted in the crash of 2000; the housing bubble preceded the credit crisis, which in turn was followed by a remarkable bull market starting in 2009. In 2020, as Covid-19 spread, the market fell 34 percent in just twenty-three days before rebounding nearly 40 percent in the ensuing weeks, driven by technology stocks.


With the current AI boom underway, a correction is inevitable. The extent of this correction will depend on how inflated AI builder and beneficiary stocks become. This inevitability is not a bug but a feature of the market. Investors must acknowledge this and be prepared by navigating sector changes and market fluctuations, and by diversifying to minimize risk.


2. Adapt

Life is lived forward, but understood backwards.

- Soren Kierkegaard


The search wars of the dot-com era heated up in 1995 with the arrival of AltaVista and Yahoo. WebCrawler and Lycos, both university projects, were the main players at the time. The space became crowded very quickly, with AskJeeves and Excite joining the party the next year. Many expected one of these incumbents to emerge as the winner-take-all victor. Three years later, out of left field, a new and unexpected player emerged. Armed with its better mousetrap, Google absolutely obliterated the competition over the next few years.


The graveyard of early movers

Will the same pattern play out in this early stage of the AI era? Will early leaders such as OpenAI, Meta, and Anthropic be made irrelevant by some new company in three years? With the obscene capital requirements to build a better model, probably not. But history reminds us of the importance of adapting in the early stages of a major platform shift. These early stages are fraught with uncertainty. Nobody knows what the true impact of the new technology will be once the receipts are counted. And certainly no one knows who the long-term winners will be. No retail investor in 2000 had the opportunity to invest in two of the eventual winners of the Internet sector, now two of the largest companies in the world: Google and Meta.


Investors must stay informed about the latest advancements and continuously reassess their views on how the AI ecosystem will develop. We must be willing to let go of our worldview and aligned bets when tectonic shifts occur.

Be prepared to reallocate investments as the ecosystem evolves and new leaders emerge.

3. Diversify

The only investors who shouldn't diversify are those who are right 100% of the time.

- Sir John Templeton


Diversification is crucial in any investment strategy, particularly in high-growth, potentially bubble-prone sectors like AI. As the AI industry undergoes its rapid industrialization phase, there are plenty of ways to diversify across the AI ecosystem - from physical infrastructure, to hardware, to AI platforms, and the enterprise/consumer application layer. Also, invest beyond the AI builders. Look for companies who are benefitting by applying AI to transform their industries, grow their revenue and increase their profit margins.



4. Focus on fundamentals

The stock market is filled with individuals who know the price of everything, but the value of nothing.

- Philip Fisher


Today's AI frenzy feels different than past bubbles. Unlike many of the public companies that drove the dot-com bubble, the public AI builders of today, such as Palantir, Google, Microsoft, and ServiceNow, are profitable. Moreover, AI leaders Meta and Alphabet flexed their confidence in remaining profitable with their recent moves to pay a dividend to investors. Their CFOs wouldn't have approved a dividend without confidence that, despite massive CAPEX, cash flow would continue to grow over the next decade.


As investors in this AI era, we don't have to choose between growth at all costs vs. fundamentals. Look for companies with strong cashflows and financial health, credible and sustainable AI monetization, and the forming of a strong moat.


5. Be skeptical and look for real value

The art of being wise is the art of knowing what to overlook

- Philosopher William James


On December 3rd, 2001, the world witnessed the debut of the Segway PT — Personal Transporter — on Good Morning America. It became an instant sensation, generating unprecedented hype. John Doerr, an early investor in Google and Amazon, boldly predicted it would surpass the internet in significance and reach $1 billion in revenue faster than any product in history.


In a highly ironic conclusion to the story, nearly ten years after the Segway was first launched to much hype, the CEO of the company, James Heselden, fell off a cliff to his untimely death while riding a Segway. It was a morbid chapter in the story of an invention that was supposed "to be to the car what the car was to the horse and buggy." It didn't take long after its launch to realize that the Segway was never going to have a market beyond mall cops and tour groups. The well-proven method of walking around is just fine, thank you very much.


In this new era of overhype, investors must remain skeptical to distinguish real value from inflated promises. We've seen this movie before. Rapid advancements and widespread media coverage inflate expectations, leading to overvalued stocks and unsustainable business models. Eventually the euphoria is replaced by a more sober and bearish view. To avoid being left holding the bag, we need to find companies that are solving meaningful real-world problems, in a unique and scalable way.


Before pouncing on companies that promise to transform industries or solve life's inconveniences with their cutting-edge AI tech, be skeptical. Make sure their solutions solve a real use case and won't be easily replaced. Don't just take their word for it; listen to what their customers are saying, review evidence of the value being created, and observe if customer contracts are being expanded or extended. Pay more attention to customer case studies and growing KPIs; and less to pilot agreements and partnership announcements.


6. Be Patient and long-term oriented

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long term

- Roy Amara


We are still in the early innings of the AI era. For society, businesses, consumers and investors to realize the full potential of AI, patience will be critical. Uncertainty and volatility will be the norm for much of the long technology cycle ahead of us.


Returning to our historical analogy, it took a decades long multi-stage cycle for the Internet to reach its full potential. The conception and feasibility stage of the Internet kicked off in the late 1960s with the launch of ARPANET. Funded by the U.S. Department of Defense, ARPANET was the groundbreaking network that revolutionized communication by enabling resilient, decentralized data sharing among military and academic institutions, paving the way for the modern internet.


It took another couple of decades, plus new supporting technologies and standards, for the Internet to fully mature. By the late '80s, the PC boom and the proliferation of modems set the scene for accelerating incremental improvements in Internet connectivity and access. However, it wasn't until the early '90s that the industry coalesced around the World Wide Web as the widely accepted framework for accessing the Internet, with the browser as its entry point, creating fertile ground for commercialization on a global scale.


While the bubble may have burst in 2000, causing the leading market indices to fall by 50% or more between 2000 and 2003, it was not the end of the story. Beginning in 2008 we saw the start of a new bubble that some refer to as Web 2.0. We reached the final stage of the technology lifecycle, where the market winners and losers become clear. In the era of Web 2.0 it was Google, Facebook (now Meta), and a persistent Amazon.


Although I believe AI commercialization won't take decades due to the laws of accelerating returns and the billions in funding AI builders have today, patience is still required for the process to unfold. Not just at a sector macro level, but with the individual companies we bet on to be winners in this era.


A great example of the patience needed to stay focused on the long term are Nick Sleep and Qais Zakaria, co-founders of the very successful Nomad Investment Fund and their remarkable journey with Amazon. Starting Nomad on the eve of the 9/11 attacks, they navigated initial market upheavals and focused on long-term fundamentals rather than short-term market noise.


In 2005, Sleep and Zakaria identified Amazon as a quintessential example of their investment philosophy. Jeff Bezos's relentless focus on reducing costs, passing savings to customers, and reinvesting in new businesses resonated deeply with their belief in "scale economies shared." Recognizing the potential of Amazon Prime and its ability to drive customer loyalty and sustainable growth, Nomad began aggressively buying Amazon shares at around $30 each.


Despite the 2008 financial crisis, which saw Nomad's assets halve, Sleep and Zakaria remained steadfast. They continued to invest in Amazon, among other high-quality businesses, reaping enormous rewards as the company thrived. By 2018, Amazon's meteoric rise accounted for a significant portion of Nomad's assets, prompting Sleep to sell half his stake to mitigate concentration risk.


Sleep and Zakaria are masters of impulse control. How else could they have held Amazon for sixteen years, watching it become a 100-bagger in their portfolio (from $30 to over $3,000 per share)? They grasped the essential truth that deferring gratification and focusing on long-term outcomes is beneficial. But understanding this principle intellectually is not sufficient. Equally important, they followed a strategy that they were confident in and that kept their impulses in check during tumultuous times.
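For the curious, the math behind that "100-bagger" is simple compounding. Using the rounded figures above ($30 to roughly $3,000 over about sixteen years), a short sketch shows the implied annual return:

```python
# The compounding behind Nomad's Amazon position, using the rounded figures above:
# roughly $30 to $3,000+ per share over about sixteen years.

buy_price, sell_price, years = 30.0, 3_000.0, 16

multiple = sell_price / buy_price                   # the "100-bagger"
cagr = (sell_price / buy_price) ** (1 / years) - 1  # compound annual growth rate

print(f"Multiple: {multiple:.0f}x, implied annual return: {cagr:.1%}")
# Multiple: 100x, implied annual return: 33.4%
```

A roughly 33% annual return sustained for sixteen years is the reward for sitting still; the hard part was not the arithmetic but the patience.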


AI is a transformative technology, but realizing its full potential will take time. Investors should be prepared for a long-term commitment and have the patience to endure periods of volatility and uncertainty.


7. Look for asymmetric opportunities

Look down, not up, when making your initial investment decision. If you don’t lose money, most of the remaining alternatives are good ones.

- Joel Greenblatt


To succeed during the AI boom, investors must resist the irrational risk-reward relationships typical of bubble mania and seek out asymmetric opportunities. Our best bets will be companies where the potential upside significantly outweighs the potential downside. These opportunities offer a favorable risk-reward ratio, promising potential gains that are much larger than the possible losses.


A prototypical asymmetric opportunity today is Amazon. It is making parallel bets on AI, with high upside potential in its AWS business but limited downside if GenAI does not pan out. Amazon has a broad array of franchises that protect investors' capital even if the AI upside never materializes.


To protect capital I look for companies with a margin of safety, which is a built-in cushion allowing some losses to be incurred without major negative effects. It is basically a safety net for investing.


In practical terms, for each investment I ask myself: What are the chances that an investment in the company will result in a significant capital loss over the next five years?


I want each investment I am considering to have a low likelihood of a significant price decline five years out. I can withstand short-term volatility, but I will walk away from an investment idea if there seems to be too high a risk of significant capital loss.
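One simple way to make this downside test concrete is to lay out a few scenarios and check the expected return and the probability of loss separately, as in the minimal sketch below. Every probability and return here is a hypothetical placeholder for illustration, not a forecast for any real company.

```python
# Minimal sketch of the asymmetric-payoff test described above: lay out a few
# scenarios, then check expected return and downside separately. All probabilities
# and returns are hypothetical placeholders, not a forecast for any real company.

scenarios = [
    # (probability, 5-year total return)
    (0.30,  2.00),   # AI bets pay off strongly: +200%
    (0.50,  0.50),   # base case: +50%
    (0.20, -0.25),   # AI disappoints: -25%
]

assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9  # probabilities must sum to 1

expected_return = sum(p * r for p, r in scenarios)
worst_case = min(r for _, r in scenarios)
prob_of_loss = sum(p for p, r in scenarios if r < 0)

print(f"Expected 5-year return: {expected_return:.0%}")                       # 80%
print(f"Worst case: {worst_case:.0%}, chance of a loss: {prob_of_loss:.0%}")  # -25%, 20%
```

An idea with a healthy expected return but a large or likely worst case fails the margin-of-safety test; an idea where the downside is both small and unlikely is the asymmetry worth paying for.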


Stay tuned for Part II: Investment approaches and opportunities in the era of AI. Part II will outline my approach to investing in the AI opportunity, criteria for finding the AI winners, and my bets for long-term winners of the AI era.


Follow me on Twitter @OwlWealthy to be notified when Part II is published.
