The technology industry has spent heavily on all things AI, from training large language models (LLMs) to building out the infrastructure required to meet demand. Investments across a range of sectors with exposure to the AI boom, including cloud computing, chips, data centers and the power grid, have lifted economic growth and supported financial markets through a period of global uncertainty. One McKinsey & Co. estimate suggests that new and updated digital infrastructure will require roughly $19 trillion in investment through 2040. Much of this capital will come from institutional investors. Supply-demand dynamics, the impact of new innovations, and the pace of adoption will help guide investors as they determine how to allocate their exposure to AI.

This episode of The Outthinking Investor explores growth opportunities and potential challenges across the AI ecosystem. Experts discuss sectors that stand to benefit from AI, intense demand for AI infrastructure, managing obsolescence risk, and whether AI can deliver on expectations for productivity and returns.

Our guests are:

  • Richard Waters, Technology Writer-at-Large for the Financial Times
  • Owen Hyde, Managing Director and Equity Research Analyst at Jennison

Learn more about the AI boom by visiting Jennison’s AI Resource Center.

Episode Transcript

>> The Nobel Memorial Prize in Economic Sciences is steeped in history. Recipients tend to receive the award years or even decades after their accomplishments, in part to ensure their work truly has lasting impact. This year, the award was shared by three economists for their research on how innovation drives economic growth. The committee highlighted that technological innovation has driven sustained economic growth for the past two centuries. One of the recipients, Joel Mokyr, commented on how the discovery of new phenomena has led to extraordinary innovations, such as genetic engineering and artificial intelligence. He described the past two decades as an age of radical innovation. Soaring market valuations of tech companies echo that optimism. That has also supported the wealth effect, or how stock market gains drive consumer spending. In fact, some estimates put the wealth effect related to AI at around half a percentage point of GDP. That's close to the wealth effect of the dot-com bubble, which was probably around 0.6%. AI could deliver on the promise of productivity, but how long will it take to see a return on the investment? OpenAI alone expects to invest over $1 trillion in data centers. Will history show that AI was the foundation for today's age of radical innovation and continued sustained economic growth? Are current hopes and market valuations reflecting true potential? And will invested assets become increasingly validated by -- or disconnected from -- companies' profitability? To understand today's investment landscape, it's important to know how we got here. This is The Outthinking Investor, a podcast from PGIM that examines the past, the present-day opportunities, and the future possibilities across global capital markets. Two experts will discuss the potential impact of artificial intelligence. Owen Hyde is a Managing Director at Jennison, covering technology stocks for the large-cap growth team. Richard Waters is Technology Writer-at-Large at the Financial Times. Seventy-five years ago, Alan Turing published a now-famous paper titled "Computing Machinery and Intelligence" and conceptualized artificial intelligence. It took decades before machine learning and, later, large language models started to take hold. And now, generative AI applications have triggered a frenzy. Richard Waters explains.

 

>> What we're seeing right now is a full-scale AI race, and it's as though fuel has been poured onto this fire, because two or three years ago, I think, even after ChatGPT came out, there was a certain amount of reserve in parts of the tech industry. There were questions about how effective this technology was going to be. There were questions about the risks and the potential regulation. We were hearing, for instance, Sam Altman, the CEO of OpenAI, going around the world, talking to politicians, saying, look, we will go slowly. We will go carefully, because we know this is potentially a transformative technology, and he didn't want to overextend. I think if we look at the present now, all restraints are off. It's a really extreme change in the environment that these companies are working in.

 

>> The pace of change in AI has been truly remarkable, even to some of its early researchers. As Owen Hyde explains, the models have been developing at such a rapid pace that adoption has struggled to keep up with what they can do.

 

>> If all the progress on the models stopped today and they didn't get any better from here, I think you'd have multiple years of product integration work to do that would make almost all the products we use today that much better. The models have moved so fast that, really, all of our behaviors are lagging well behind. And so, the combination of those two things is pretty exciting.

 

>> Science and medicine are two areas that were originally expected to be major consumers of AI. One of the earliest use cases was the warp-speed development of COVID vaccines.

 

>> We're seeing early signs that they're going to be big consumers of this technology, but you have to remember there are also a lot of legal and regulatory question marks around how that data is used. I think they're going as fast as they can down this path, given the restrictions they face around data privacy and staying compliant across the board. So I think, ultimately, they will be major beneficiaries of this. It's just a question of time and of how they're able to tackle some of the hurdles that prevent them from moving as fast as they'd like; they have to operate within the framework they have.

 

[ Music ]

 

>> Markets have been so bullish on the potential applications of AI that one might even call it irrational exuberance. But there are also clear challenges ahead. The two most obvious are securing vast amounts of energy, which means building out power sources and expanding the grid, and acquiring the specialized chips needed to run the planned data centers. Another key challenge is recruiting top talent.

 

>> You look longer term and you think about who is going to have the capability to build a sustainable lead over the long term, and, no surprise, Silicon Valley and some of its biggest companies, led by Google, have sucked up most of the talent in AI. It's led to an incredible scramble, because whereas three or four years ago only Google, OpenAI, and one or two other smaller companies were really focused on generative AI, now it seems every big tech company has decided this is a core capability that they need. And it's led to the most incredible pursuit of the best brains. At Meta, I think Mark Zuckerberg has become the bidder of last resort for AI talent in Silicon Valley, if you like. He's bidding up salaries to incredible levels, paying huge signing bonuses into the hundreds of millions of dollars to get the best talent. And I think everybody is now frantically trying to retain and attract the people who can keep this machine running, because the advanced research needed to keep the large language model advances going is really at the cutting edge. I think the real paradox here, and it is a real paradox in large language models, is that it takes this incredibly specialized skill to produce these effects, and yet it's getting harder and harder for any of these companies to maintain any differentiation. One week, it'll be a new OpenAI model that's on top. The next week, it'll be Google, and then, the next week, it'll be DeepSeek, the Chinese company that shocked everybody at the beginning of 2025 with an extremely low-priced model managing to achieve great results. And so, for all the talent that's being thrown at it, I think the question is going to be, who can actually generate a meaningful differentiation that they can charge for?

 

>> That's a question for investors who may be wondering how to allocate their exposure to AI. Will the evolution of the AI landscape continue to look like a classic arms race? And if so, does that mean that, as the saying goes, to the victor go the spoils?

 

>> Is it a winner-take-all market at the silicon layer, so in the semiconductor layer? Is it a winner-take-all market at the cloud computing layer, the public cloud layer? Is it a winner-take-all market inside of the model layer or the application layer? I don't think it is. So far, if I just take them one by one, NVIDIA has obviously dominated the semiconductor space, but there is a real competitive threat from folks like AMD and Avago and others. Now, we think that folks like NVIDIA will continue to have the highest market share, and we think that expectations around the size of that market have gone in one direction. And we think that there's probably a very large opportunity for multiple winners in that space over time, just because it is growing so fast and there is such a shortage as it relates to AI accelerators. And frankly, there are different use cases for different types of chips, so there is some market segmentation that will happen there over time. That might be a winner-take-most market, depending on which part of the accelerator market we're looking at. And then, if you move to the public cloud space, I mean, that's certainly divided amongst multiple players, whether it's Google Cloud, Microsoft Azure, or AWS; Oracle is also playing in the space.

 

[ Music ]

 

And then there's a whole handful of smaller clouds that are focused on just providing GPU compute, so I don't think that's a winner-take-all market. And then, when you get to the model layer, I think that might be the most centralized of all of them. I think there might be somewhere between three and four companies competing at the frontier model level for quite some time. I mean, you obviously have OpenAI; you have Anthropic; you have Google; you have xAI. And then there's a whole host of companies in the open-source space, a lot of them outside of the U.S. And so, I think there's a pretty healthy competitive environment there, and frankly, I've watched them leapfrog each other every three to six months. So I think that's almost the most competitive of all the spaces. So to me, I don't think that's a winner-take-all market either. And then, if I look at the application space, I don't see a really clear emerging winner there yet. And that tends to be the most fragmented, sort of use case by use case and end market by end market, whether you're talking about a consumer app, a sales and marketing application, or a back-office system for tracking expenses. These are all very fragmented markets. We'll see how it evolves. This is obviously one of the most dynamic spaces I've ever seen, so things change very quickly.

 

>> Another important aspect of how AI evolves is its impact on the level and nature of employment. With previous technology shifts, we've often seen that the nature of work changes, but the demand for work doesn't significantly decline.

 

>> The big fallacy is always that there are only a certain number of jobs to be done, and once machines can do some of those jobs, then humans won't be needed anymore. What we've seen, though, is that as automation brings down the cost of work, as it becomes easier to do things you couldn't do before, companies do more of it, because the return they need on the investment they're making falls. If it becomes cheaper for an investment company, for instance, to produce individualized investment reports for every single customer they have, because AI can simply generate those, then they'll do that. That's the big potential here: that companies will do more; that productivity will rise because they won't need a lot more people to produce all this extra output; and that employment will stay steady, or certain types of employment will go up and certain types will go down, which happens with every technology shift. What we haven't seen yet is what the new market opportunities are. What are the new products, and who will be able to monetize them? We're at the very, very early stages of that.

 

[ Music ]

 

>> In fact, there may even be growing awareness among companies and senior management that reducing the workforce shouldn't necessarily be the goal. Competitive forces will always be at play.

 

>> All of a sudden, if you've reduced your headcount dramatically, I think you're at a disadvantage. So maybe people's first gut instinct is to think, oh, well, this is going to be a real problem for employment, but all it takes is for your competitor not to take that action and instead just produce more output, and all of a sudden you have a totally different competitive dynamic on your hands. So I think maybe we're just thinking about the first-order effects rather than the fullness of time.

 

>> It seems reasonable to look for cost savings from AI adoption, given the huge capital expenditures companies are making, particularly big tech companies. Current spending far exceeds even the optimistic forecasts of just a few years ago. For 2026 alone, data center spending is projected to be nearly half a trillion dollars, and that's just for the small group of large cloud companies. Will this stress the financial stability of these companies, the technology sector, or the global economy? That ultimately hinges on the return on investment.

 

>> Well, the good news is that the hyperscalers, the biggest tech companies, are awash in cash, and I think if you come back to Microsoft, its free cash flow is still increasing at a healthy pace, even though it's devoting close to $100 billion a year to capex now. It's maintaining a stock buyback at something like $40 billion a year, and it still has room to grow its free cash flow. And so, that's the optimistic picture for these big tech companies. It's a strategic imperative, and they've got the money to support this. Of course, whether that dampens their returns longer term is a different question. However, for some companies, this is going to create strain, and I think you only have to look at Meta, which missed the early stages of the generative AI boom and has now decided that this is existential. They have no choice. They really need to redirect massive amounts of money to try and build the next AI agents that we're all going to want to use online. And so, for Meta to contemplate $100 billion of capex in 2026 is extraordinary. That is 50% of the entire revenue they're expecting to generate that year. Even for a company as rich as Meta, that has led them to look at external investments. They're looking at bringing in partners, whether it's private equity or other kinds of vehicles. Across the AI landscape now, we're seeing all these innovative new financing ideas around special purpose vehicles and partnerships, where people are creating new investment channels to bring money into some of these bigger companies to maintain this boom, and we don't yet know how much financial fragility that might introduce into the system. We don't know who these outside investors are. We don't know how strong these vehicles are going to be, and obviously, we don't know what kind of returns these companies are going to make longer term to support this kind of capex.
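To put the quoted figures in one place, here is a minimal back-of-the-envelope sketch using only the rough numbers mentioned above. Meta's implied revenue is simply derived from the "50% of revenue" remark; it is an illustration, not company guidance.

```python
# Back-of-the-envelope sketch of the capex intensity described above.
# Capex and buyback figures are the rough numbers quoted in the episode,
# in billions of USD; Meta's implied revenue is derived from the
# "capex is 50% of expected revenue" remark and is not company guidance.

meta_capex_2026 = 100.0            # ~$100B of planned 2026 capex
meta_capex_to_revenue = 0.50       # "50% of their entire revenue"
meta_implied_revenue = meta_capex_2026 / meta_capex_to_revenue

print(f"Meta implied 2026 revenue: ~${meta_implied_revenue:.0f}B")
print(f"Meta capex intensity: ~{meta_capex_to_revenue:.0%} of revenue")

# For comparison, Microsoft is described as spending close to $100B a year on
# capex while still buying back roughly $40B of stock and growing free cash
# flow, a rough sense of how much cash is needed to self-fund the buildout.
msft_capex = 100.0
msft_buyback = 40.0
print(f"Microsoft cash committed to capex plus buybacks: ~${msft_capex + msft_buyback:.0f}B a year")
```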

 

[ Music ]

 

>> A key factor here is the potential for rapid obsolescence. Technology booms may have helped drive global economic growth, but they've also typically led to significant overcapacity. That happened with railroads in the late 1800s and then with fiber-optic cables a century later during the dot-com bubble. For artificial intelligence, the current boom is concentrated in data centers and specialized chips.

 

>> So roughly 50% of capital investment by the big cloud companies is directed towards servers, and most of the value of those servers is in the chips. So what we're seeing is just a race to develop the core capability to train the next AI models and to deliver the next result when you're talking to ChatGPT. Now, that infrastructure has a shelf life, and what we don't know yet is how short it is. And there's a kind of bifurcation opening up here, because on the one hand, the race between the chip companies to develop better processors is not slowing down. NVIDIA is bringing out a new generation every year and is doing very well at obsoleting its older models, if you like, making them less valuable because everybody wants the latest new models. On the other hand, the cloud companies are extending the useful lives of their assets. They're essentially assuming their servers will have a longer life, and they're depreciating them over a longer period of time. Over the last two to three years, we've seen all the big cloud companies extend the lives of their equipment from around two and a half or three years to more like five years. That's great for their profits, because it means they're writing down less each year, but it raises a question: at the end of five years, what will a lot of that equipment be worth? Will it be worth anything like what it is now? If there is some kind of discontinuity in the market, if there is a bust, if the capacity is not needed in the short term, will that same capacity retain its value? That's what we don't know. You know, the fiber-optic cable that was laid at the end of the '90s ended up having a lot of value because it just lay there as dark fiber in the ground, but as soon as demand really came back, it was waiting to be used. And so, that dark fiber has had value. Over the longer term, these AI chips may not.
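To make the depreciation mechanics concrete, here is a minimal sketch assuming straight-line depreciation and the rough useful lives mentioned above (around three years, extended to around five). The cost figure is a placeholder chosen for illustration.

```python
# Minimal sketch of how extending a server's assumed useful life lowers the
# annual depreciation charge under straight-line depreciation. The cost figure
# is a placeholder; only the ratio of cost to useful life matters.

def annual_straight_line_depreciation(cost: float, useful_life_years: float) -> float:
    """Straight-line depreciation: the same charge is booked each year of the asset's life."""
    return cost / useful_life_years

server_cost = 100.0  # placeholder cost for a block of server capacity

for life_years in (3, 5):
    charge = annual_straight_line_depreciation(server_cost, life_years)
    print(f"{life_years}-year life: annual depreciation charge = {charge:.1f} per 100 of cost")

# A 3-year life books ~33.3 per year; a 5-year life books 20.0 per year.
# The lower annual charge flows through to higher reported profit in the near
# term, while leaving more undepreciated book value exposed if the hardware
# turns out to be obsolete before year five.
```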

 

[ Music ]

 

>> One of the biggest costs of doing business for the tech companies is research and development, and quite a lot of that goes to coding. This is also an area of rapid adoption, so much so that college grads with computer programming degrees are now experiencing high levels of unemployment. That's a major turnaround from just a couple of years ago, and it demonstrates how difficult it is to forecast the evolution of AI. It also suggests that tech companies have been among the best fits for AI adoption so far.

 

>> Their productivity has gone up dramatically, and so I think the return on R&D spend at a lot of these tech companies is going to go higher, because they're able to do more with the same amount of resources. I think the natural outcome of that is faster product releases and higher-quality products. Much more software will be written across the board, and that can show up in a plethora of different ways, such as faster product releases for someone like Meta or Google. It can mean a lot of different things, so that's one way. And clearly, in the cloud ecosystem, the business model is to build data centers, put compute inside those data centers, and then turn around and rent that capacity out to customers. This is a brand-new type of infrastructure that they get to rent to their customers once they install that capacity. And that's sort of where we are right now. We are in that phase where we're building out this new layer of infrastructure. It's expensive to build up front, but then you have a long stream of cash flows that come on the other side of that. So that's really where the most dollars are being deployed: on the infrastructure side. And we're expecting that to yield pretty strong returns over the next few years, as a lot of this infrastructure goes into the ground and there's very high demand to rent it today.
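As a rough illustration of the "expensive up front, long stream of cash flows" economics described here, the sketch below works through a simple payback and discounted cash flow calculation. The upfront cost, annual rental cash flow, useful life, and discount rate are all assumed for illustration; none of them come from the episode.

```python
# Illustrative payback and NPV sketch for the "build infrastructure up front,
# then rent it out over time" model described above. All inputs are assumptions
# chosen for illustration, not figures from the episode.

upfront_cost = 1_000.0           # cost to build and equip a data center (arbitrary units)
annual_rental_cash_flow = 300.0  # cash received each year from renting out the capacity
useful_life_years = 5            # how long the capacity stays rentable
discount_rate = 0.10             # required annual return

npv = -upfront_cost + sum(
    annual_rental_cash_flow / (1 + discount_rate) ** year
    for year in range(1, useful_life_years + 1)
)
payback_years = upfront_cost / annual_rental_cash_flow

print(f"Simple payback: ~{payback_years:.1f} years")
print(f"NPV over a {useful_life_years}-year life at {discount_rate:.0%}: {npv:,.1f}")

# Whether the NPV ends up positive hinges on how long the hardware stays
# rentable at healthy rates, which is the obsolescence question raised
# elsewhere in the episode.
```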

 

[ Music ]

 

>> What does the future hold for the big tech companies as AI continues to evolve? Two things to watch are the delay in realizing a return on investment and how disruptive AI ends up being to their current business models.

 

>> When I talk to tech executives now, there's a much bigger acceptance that this is a disruptive change. We're seeing a whole new generation of software companies that are AI-native. These are companies that are built on AI. They're not existing software companies trying to graft on a few AI features or models. They are companies that are fundamentally different. We've seen them first in the area of software coding, with products like Cursor that have become incredibly popular because they're devised with an AI model at their heart and a whole new way of delivering a service. And we're starting to see those things happen; obviously we've seen it with ChatGPT, an entirely new thing that has drawn attention and eyeballs away from some of the biggest consumer internet services out there. In every other period of disruption, we see this. We see the big companies in denial far longer than they should be. It's very, very hard for them to comprehend how what seems like such a solid market, such a strong grip on their customers, can just start to change with customer behavior, when people start to shift their attention to new things. And I'm just starting to see that. I'm just starting to see that awareness and that fear, if you like, creeping in, and I think the next couple of years are going to be a real test of how many of these new services can reach critical mass and which of the big existing companies are going to come through it unscathed.

 

>> Owen Hyde sees great potential for investment opportunity across the AI landscape, and in a few areas in particular.

 

>> It's AI infrastructure, and then, I think, the semiconductor side; those are the two areas where folks are getting the most exposure, and power is another big theme. The hyperscalers are clearly in a good position here, as are the semiconductor supply chain and networking guys, and that's how most folks are getting their AI exposure today. And the earnings growth is supporting those moves; it's not as though valuation has outstripped historical norms in that respect. Their earnings are above historical norms because they're making a lot of money building infrastructure, but the multiples themselves are well within historical ranges. They've been in those ranges for some time, and the earnings growth has been pretty material.

 

>> Leading companies will likely accept that AI is here to stay, in some form or another, and figure out how to reimagine their processes to optimize its adoption.

 

>> How do they co-opt the new ideas and the new technologies, and keep the new talent inside their current organizations? You'll see stresses inside the big companies as they first assume they can absorb the technology into their existing businesses, then try to readjust, to change, to overhaul their systems, to change their hiring practices, to carve out new operations, to try to keep up with the startups who are really gathering momentum. It's those kinds of changes, as they creep through the corporate world, that take time, and it's extremely hard for a big company to do. That's probably the next three to five years of tech industry evolution here, if you like: which companies are going to work out how to internalize the technology, but then carve out the processes and the ideas and the products, and really withstand any kind of disruption from outside?

 

[ Music ]

 

>> Markets have been very confident about the long-term potential of AI, giving companies a lot of leeway on their capital expenditure in the short term. But at some point, the long term will become the short term, and companies will need to justify their capex. Thanks to Owen Hyde and Richard Waters for their insights on the risks and opportunities that artificial intelligence presents to investors and to the broader economy. The Outthinking Investor is a podcast from PGIM. Follow, subscribe, and if you like what you hear, go ahead and give us a review. If this episode piqued your interest in AI, explore more insights at Jennison's AI Resource Center. Discover expert perspectives, research, and thought leadership by clicking on the link in the show notes.

 

>> Hello, Daleep Singh here, Vice-Chair and Chief Global Economist at PGIM. I'm excited to announce some upcoming changes to The Outthinking Investor podcast. Starting in 2026, I'll be stepping in as your new host, bringing you fresh perspectives from experts and policymakers from across the globe. Together, our goal is to cut through the noise and break down how geopolitics, technology, and global forces are reshaping markets in real time. At PGIM, we're all deeply grateful for your continued support as listeners and valued members of our community. Please stay tuned.

 

[ Music ]

 

Past performance is not a guarantee of future results. All investments involve risk, including the loss of capital. PGIM is not acting as your fiduciary. The contents are for informational purposes only, are based on information available when created, and are subject to change. It is not intended as investment, legal, or tax advice and does not consider a recipient's financial objectives. This podcast includes the views and opinions of the authors and may not reflect PGIM's views. PGIM and its related entities may make investment decisions that are inconsistent with the views expressed herein. This podcast should not be reproduced without PGIM's prior written consent. No liability is accepted for any direct, indirect, or consequential loss that may arise from any use of the information contained in or derived from this podcast. This material is not for distribution to any recipient located in any jurisdiction where such distribution is unlawful. PGIM is the global asset management business of Prudential Financial, Inc., which is not affiliated in any manner with Prudential plc, incorporated in the United Kingdom, or with Prudential Assurance Company, a subsidiary of M&G plc, incorporated in the United Kingdom. Copyright 2025. The PGIM logo and the Rock symbol are service marks of PGIM's parent and its related entities, registered in many jurisdictions worldwide.

 

 


5028258