GEN AI: TOO MUCH SPEND, TOO LITTLE BENEFIT?
ISSUE 129 | June 25, 2024 | 5:10 PM EDT
Global Macro Research | The Goldman Sachs Group, Inc.

Investors should consider this report as only a single factor in making their investment decision. For Reg AC certification and other important disclosures, see the Disclosure Appendix, or go to www.gs.com/research/hedge.html.

Tech giants and beyond are set to spend over $1tn on AI capex in coming years, with so far little to show for it. So, will this large spend ever pay off? MIT's Daron Acemoglu and GS' Jim Covello are skeptical, with Acemoglu seeing only limited US economic upside from AI over the next decade and Covello arguing that the technology isn't designed to solve the complex problems that would justify the costs, which may not decline as many expect. But GS' Joseph Briggs, Kash Rangan, and Eric Sheridan remain more optimistic about AI's economic potential and its ability to ultimately generate returns beyond the current "picks and shovels" phase, even if AI's "killer application" has yet to emerge. And even if it does, we explore whether the current chips shortage (with GS' Toshiya Hari) and looming power shortage (with Cloverleaf Infrastructure's Brian Janous) will constrain AI growth. But despite these concerns and constraints, we still see room for the AI theme to run, either because AI starts to deliver on its promise, or because bubbles take a long time to burst.
INTERVIEWS WITH:
- Daron Acemoglu, Institute Professor, MIT
- Brian Janous, Co-founder, Cloverleaf Infrastructure; former Vice President of Energy, Microsoft
- Jim Covello, Head of Global Equity Research, Goldman Sachs
- Kash Rangan, US Software Equity Research Analyst, Goldman Sachs
- Eric Sheridan, US Internet Equity Research Analyst, Goldman Sachs

WHAT'S INSIDE:
- ADDRESSING THE AI GROWTH DEBATE | Joseph Briggs, GS Global Economics Research
- ONCE IN A GENERATION | Carly Davenport, GS US Utilities Equity Research
- AI: POWERING UP EUROPE | Alberto Gandolfi, GS European Utilities Equity Research
- AI'S CHIP CONSTRAINTS | Toshiya Hari, Anmol Makkar, David Balaban, GS US Semiconductor Equity Research
- FULL STEAM AHEAD FOR AI BENEFICIARIES | Ryan Hammond, GS US Portfolio Strategy Research
- AI OPTIMISM AND LONG-TERM EQUITY RETURNS | Christian Mueller-Glissmann, GS Multi-Asset Strategy Research
- ...AND MORE

"Given the focus and architecture of generative AI technology today... truly transformative changes won't happen quickly and few—if any—will likely occur within the next 10 years." - Daron Acemoglu
"Spending is certainly high today in absolute dollar terms. But this capex cycle seems more promising than even previous capex cycles." - Kash Rangan
"AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn't designed to do." - Jim Covello
"[AI] dollars spent vs. company revenues... are not materially different than those of prior investment cycles." - Eric Sheridan

Allison Nathan | allison.nathan@gs.com
Jenny Grimberg | jenny.grimberg@gs.com
Ashley Rhodes | ashley.rhodes@gs.com
Goldman Sachs Global Investment Research | Top of Mind | Issue 129

Macro news and views
We provide a brief snapshot on the most important economies for the global markets.

US
Latest GS proprietary datapoints/major changes in views:
- No major changes in views.
Datapoints/trends we're focused on:
- Fed policy; we expect quarterly Fed rate cuts beginning in September, for a total of two cuts this year.
- Inflation; we expect core PCE inflation to stand at 2.7% yoy by Dec 2024 before converging toward 2% next year.
- Growth; we think most of the slowdown from the 4.1% pace of real GDP growth in 2H23 is here to stay given softer real income growth, lower consumer sentiment, and election-related uncertainty that could weigh on business investment.
- The labor market, which is now fully rebalanced, likely meaning that a material softening in labor demand would hit actual jobs.

Election uncertainty: a potential growth drag
NFIB Small Business Uncertainty Index (presidential election years highlighted)
Source: Haver Analytics, Goldman Sachs GIR.

Japan
Latest GS proprietary datapoints/major changes in views:
- We now expect the next BoJ rate hike in July (vs. Oct before), as the hurdle for the next hike is low given that it will likely be only 15bp and the BoJ sees the policy rate as significantly lower than the current nominal neutral rate.
Datapoints/trends we're focused on:
- Japanese inflation; sequential core inflation has recently shown signs of weakness, but we expect core inflation to remain above the BoJ's target this year at 2.6% yoy.
- Japan's rising interest burden, which will likely be manageable for households and corporates given that it is occurring against a backdrop of solid activity and steady wage growth.
- Japan financial conditions, which continue to ease.

Japanese households: net interest receivers
Interest payments & receipts (lhs, ¥tn), net interest receipt (rhs, % of 2023 disposable income)
Source: Goldman Sachs GIR.

Europe
Latest GS proprietary datapoints/major changes in views:
- We raised our 2024 UK GDP growth forecast to 0.9% (from 0.8%) following slightly above-consensus April GDP data.
Datapoints/trends we're focused on:
- ECB policy; we expect the next rate cut in Sept, though we think a pause in the easing cycle is possible if inflation and wage data surprise to the upside over the summer.
- BoE policy; we expect the BoE to embark on rate cuts in August on the back of renewed UK disinflation progress.
- French snap elections (Jun 30), which could result in a fiscal expansion that would lead the debt-to-GDP ratio to rise.
- UK general election (Jul 4), which will likely deliver relatively similar fiscal outcomes irrespective of which party wins.

French election: upside risk to debt trajectory
French government debt, % of GDP (actual vs. status quo, deadlock, and expansion scenarios)
Source: Goldman Sachs GIR.

Emerging Markets (EM)
Latest GS proprietary datapoints/major changes in views:
- We recently pushed back our PBOC easing forecasts by one quarter given ample near-term liquidity, and now expect a 25bp RRR cut in Q3 and a 10bp policy rate cut in Q4.
Datapoints/trends we're focused on:
- China's economy, which remains bifurcated between strength in exports and manufacturing activity and weakness in housing and credit, coupled with very low inflation.
- EM easing cycle; we think the fundamental case for further EM rate cuts remains strong, though the recent unwind in EM FX carry trades following electoral surprises in Mexico, India, and South Africa could impede policy normalization.

China: a bifurcated economy
China activity indicator, % change yoy (property, infrastructure, consumption, and manufacturing categories)
Source: Haver Analytics, Goldman Sachs GIR.
Gen AI: too much spend, too little benefit?

The promise of generative AI technology to transform companies, industries, and societies continues to be touted, leading tech giants, other companies, and utilities to spend an estimated ~$1tn on capex in coming years, including significant investments in data centers, chips, other AI infrastructure, and the power grid. But this spending has little to show for it so far beyond reports of efficiency gains among developers. And even the stock of the company reaping the most benefits to date—Nvidia—has sharply corrected. We ask industry and economy specialists whether this large spend will ever pay off in terms of AI benefits and returns, and explore the implications for economies, companies, and markets if it does, or if it doesn't.

We first speak with Daron Acemoglu, Institute Professor at MIT, who's skeptical. He estimates that only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years, implying that AI will impact less than 5% of all tasks. And he doesn't take much comfort from history showing that technologies improve and become less costly over time, arguing that AI model advances likely won't occur nearly as quickly—or be nearly as impressive—as many believe. He also questions whether AI adoption will create new tasks and products, saying these impacts are "not a law of nature." So, he forecasts AI will increase US productivity by only 0.5% and GDP growth by only 0.9% cumulatively over the next decade.

GS Head of Global Equity Research Jim Covello goes a step further, arguing that to earn an adequate return on the ~$1tn estimated cost of developing and running AI technology, it must be able to solve complex problems, which, he says, it isn't built to do. He points out that truly life-changing inventions like the internet enabled low-cost solutions to disrupt high-cost solutions even in its infancy, unlike costly AI tech today.
And he's skeptical that AI's costs will ever decline enough to make automating a large share of tasks affordable given the high starting point as well as the complexity of building critical inputs—like GPU chips—which may prevent competition. He's also doubtful that AI will boost the valuation of companies that use the tech, as any efficiency gains would likely be competed away, and the path to actually boosting revenues is unclear, in his view. And he questions whether models trained on historical data will ever be able to replicate humans' most valuable capabilities.

But GS senior global economist Joseph Briggs is more optimistic. He estimates that gen AI will ultimately automate 25% of all work tasks and raise US productivity by 9% and GDP growth by 6.1% cumulatively over the next decade. While Briggs acknowledges that automating many AI-exposed tasks isn't cost-effective today, he argues that the large potential for cost savings and likelihood that costs will decline over the long run—as is often, if not always, the case with new technologies—should eventually lead to more AI automation. And, unlike Acemoglu, Briggs incorporates both the potential for labor reallocation and new task creation into his productivity estimates, consistent with the strong and long historical record of technological innovation driving new opportunities.

GS US software analyst Kash Rangan and internet analyst Eric Sheridan also remain enthusiastic about generative AI's long-term transformative and returns potential even as AI's "killer application" has yet to emerge. Despite big tech's large spending on AI infrastructure, they don't see signs of irrational exuberance. Indeed, Sheridan notes that current capex spend as a share of revenues doesn't look markedly different from prior tech investment cycles (see pg. 15), and that investors are rewarding only those companies that can tie a dollar of AI spending back to revenues.
Rangan, for his part, argues that the potential for returns from this capex cycle seems more promising than even previous cycles given that incumbents with low costs of capital and massive distribution networks and customer bases are leading it. So, both Sheridan and Rangan are optimistic that the huge AI spend will eventually pay off.

But even if AI could potentially generate significant benefits for economies and returns for companies, could shortages of key inputs—namely, chips and power—keep the technology from delivering on this promise? GS US semiconductor analysts Toshiya Hari, Anmol Makkar, and David Balaban argue that chips will indeed constrain AI growth over the next few years, with demand for chips outstripping supply owing to shortages in High-Bandwidth Memory technology and Chip-on-Wafer-on-Substrate packaging—two critical chip components.

But the bigger question seems to be whether power supply can keep up. GS US and European utilities analysts Carly Davenport and Alberto Gandolfi, respectively, expect the proliferation of AI technology, and the data centers necessary to feed it, to drive an increase in power demand the likes of which hasn't been seen in a generation (which GS commodities strategist Hongcen Wei finds early evidence of in Virginia, a hotbed for US data center growth). Brian Janous, Co-founder of Cloverleaf Infrastructure and former VP of Energy at Microsoft, believes that US utilities—which haven't experienced electricity consumption growth in nearly two decades and are contending with an already aged US power grid—aren't prepared for this coming demand surge. He and Davenport agree that the required substantial investments in power infrastructure won't happen quickly or easily given the highly regulated nature of the utilities industry and supply chain constraints, with Janous warning that a painful power crunch that could constrain AI's growth likely lies ahead.

So, what does this all mean for markets?
Although Covello believes AI's fundamental story is unlikely to hold up, he cautions that the AI bubble could take a long time to burst, with the "picks and shovels" AI infrastructure providers continuing to benefit in the meantime. GS senior US equity strategist Ryan Hammond also sees more room for the AI theme to run and expects AI beneficiaries to broaden out beyond just Nvidia, and particularly to what looks set to be the next big winner: utilities. That said, looking at the bigger picture, GS senior multi-asset strategist Christian Mueller-Glissmann finds that only the most favorable AI scenario, in which AI significantly boosts trend growth and corporate profitability without raising inflation, would result in above-average long-term S&P 500 returns, making AI's ability to deliver on its oft-touted potential even more crucial.

Allison Nathan, Editor
Email: allison.nathan@gs.com | Tel: 212-357-7504
Goldman Sachs & Co. LLC
Interview with Daron Acemoglu

Daron Acemoglu is Institute Professor at MIT and has written several books, including Why Nations Fail: The Origins of Power, Prosperity, and Poverty and his latest, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. Below, he argues that the upside to US productivity and growth from generative AI technology over the next decade—and perhaps beyond—will likely be more limited than many expect.

The views stated herein are those of the interviewee and do not necessarily reflect those of Goldman Sachs.

Allison Nathan: In a recent paper, you argued that the upside to US productivity and, consequently, GDP growth from generative AI will likely prove much more limited than many forecasters—including Goldman Sachs—expect. Specifically, you forecast a ~0.5% increase in productivity and ~1% increase in GDP in the next 10 years vs. GS economists' estimates of a ~9% increase in productivity and 6.1% increase in GDP. Why are you less optimistic on AI's potential economic impacts?

Daron Acemoglu: The forecast differences seem to revolve more around the timing of AI's economic impacts than the ultimate promise of the technology. Generative AI has the potential to fundamentally change the process of scientific discovery, research and development, innovation, new product and material testing, etc., as well as create new products and platforms. But given the focus and architecture of generative AI technology today, these truly transformative changes won't happen quickly, and few—if any—will likely occur within the next 10 years. Over this horizon, AI technology will instead primarily increase the efficiency of existing production processes by automating certain tasks or by making workers who perform these tasks more productive.
So, estimating the gains in productivity and growth from AI technology on a shorter horizon depends wholly on the number of production processes that the technology will impact and the degree to which this technology increases productivity or reduces costs over this timeframe.

My prior guess, even before looking at the data, was that the number of tasks that AI will impact in the short run would not be massive. Many tasks that humans currently perform, for example in the areas of transportation, manufacturing, mining, etc., are multifaceted and require real-world interaction, which AI won't be able to materially improve anytime soon. So, the largest impacts of the technology in the coming years will most likely revolve around pure mental tasks, which are non-trivial in number and size but not huge, either.

To quantify this, I began with Eloundou et al.'s comprehensive study that found that the combination of generative AI, other AI technology, and computer vision could transform slightly over 20% of value-added tasks in the production process. But that's a timeless prediction. So, I then looked at another study by Thompson et al. on a subset of these technologies—computer vision—which estimates that around a quarter of tasks that this technology can perform could be cost-effectively automated within 10 years. If only 23% of exposed tasks are cost-effective to automate within the next ten years, this suggests that only 4.6% of all tasks will be impacted by AI. Combining this figure with the 27% average labor cost savings estimates from Noy and Zhang's and Brynjolfsson et al.'s studies implies that total factor productivity effects within the next decade should be no more than 0.66%—and an even lower 0.53% when adjusting for the complexity of hard-to-learn tasks. And that figure roughly translates into a 0.9% GDP impact over the decade.

Allison Nathan: Recent studies estimate cost savings from the use of AI ranging from 10% to 60%, yet you assume only around 30% cost savings.
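Acemoglu's task-share arithmetic can be sketched in a few lines. This is a hedged illustration using only the figures quoted in the interview; the variable names are ours, and the final mapping from cost savings to the 0.66%/0.53% TFP and 0.9% GDP figures involves labor-cost weighting from his paper that is not reproduced here.

```python
# Sketch of the task-share arithmetic described in the interview.
# All input shares are quoted from the text; names are illustrative.

exposed_share = 0.199        # value-added tasks exposed to AI ("slightly over 20%", Eloundou et al.)
cost_effective_share = 0.23  # exposed tasks cost-effective to automate in 10y (Thompson et al.)

impacted_share = exposed_share * cost_effective_share
print(f"Share of all tasks impacted within a decade: {impacted_share:.1%}")  # 4.6%

avg_labor_cost_savings = 0.27  # average of Noy & Zhang and Brynjolfsson et al.

# Acemoglu's framework then maps these inputs to a TFP effect of no more than
# 0.66% (0.53% after adjusting for hard-to-learn tasks) and a ~0.9% GDP impact
# over the decade; that last step is not reproduced here.
```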
Why is that?

Daron Acemoglu: Of the three detailed studies published on AI-related costs, I chose to exclude the one with the highest cost savings—Peng et al.'s estimate of 56%—because the task in the study that AI technology so markedly improved was notably simple. It seems unlikely that other, more complex tasks will be affected as much. Specifically, the study focuses on time savings incurred by utilizing AI technology—in this case, GitHub Copilot—for programmers to write simple subroutines in HTML, a task for which GitHub Copilot had been extensively trained. My sense is that such cost savings won't translate to more complex, open-ended tasks like summarizing texts, where more than one right answer exists. So, I excluded this study from my cost-savings estimate and instead averaged the savings from the other two studies.

Allison Nathan: While AI technology cannot perform many complex tasks well today—let alone in a cost-effective manner—the historical record suggests that as technologies evolve, they both improve and become less costly. Won't AI technology follow a similar pattern?

Daron Acemoglu: Absolutely. But I am less convinced that throwing more data and GPU capacity at AI models will achieve these improvements more quickly. Many people in the industry seem to believe in some sort of scaling law, i.e. that doubling the amount of data and compute capacity will double the capability of AI models. But I would challenge this view in several ways. What does it mean to double AI's capabilities? For open-ended tasks like customer service or understanding and summarizing text, no clear metric exists to demonstrate that the output is twice as good. Similarly, what does a doubling of data really mean, and what can it achieve?
Including twice as much data from Reddit into the next version of GPT may improve its ability to predict the next word when engaging in an informal conversation, but it won't necessarily improve a customer service representative's ability to help a customer troubleshoot problems with their video service. The quality of the data also matters, and it's not clear where more high-quality data will come from and whether it will be easily and cheaply available to AI models. Lastly, the current architecture of AI
technology itself may have limitations. Human cognition involves many types of cognitive processes, sensory inputs, and reasoning capabilities. Large language models (LLMs) today have proven more impressive than many people would have predicted, but a big leap of faith is still required to believe that the architecture of predicting the next word in a sentence will achieve capabilities as smart as HAL 9000 in 2001: A Space Odyssey. It's all but certain that current AI models won't achieve anything close to such a feat within the next ten years.

Allison Nathan: So, are the risks to even your relatively conservative estimates of AI's economic impacts over the next 5-10 years skewed to the downside?

Daron Acemoglu: Both downside and upside risks exist. Technological breakthroughs are always possible, although even such breakthroughs take time to have real impact. But even my more conservative estimates of productivity gains may turn out to be too large if AI models prove less successful in improving upon more complex tasks. And while large organizations such as the tech companies leading the development of AI technology may introduce AI-driven tools quickly, smaller organizations may be slower to adopt them.

Allison Nathan: Over the longer term, what odds do you place on AI technology achieving superintelligence?

Daron Acemoglu: I question whether AI technology can achieve superintelligence over even longer horizons because, as I said, it is very difficult to imagine that an LLM will have the same cognitive capabilities as humans to pose questions, develop solutions, then test those solutions and adapt them to new circumstances. I am entirely open to the possibility that AI tools could revolutionize scientific processes on, say, a 20-30-year horizon, but with humans still in the driver's seat.
So, for example, humans may be able to identify a problem that AI could help solve, then humans could test the solutions the AI models provide and make iterative changes as circumstances shift. A truly superintelligent AI model would be able to achieve all of that without human involvement, and I don't find that likely on even a thirty-year horizon, and probably beyond.

Allison Nathan: Your colleague David Autor and coauthors have shown that technological innovations tend to drive the creation of new occupations, with 60% of workers today employed in occupations that didn't exist 80 years ago. So, could the impact of AI technology over the longer term prove more significant than you expect?

Daron Acemoglu: Technological innovation has undoubtedly meaningfully impacted nearly every facet of our lives. But that impact is not a law of nature. It depends on the types of technologies that we invent and how we use them. So, again, my hope is that we use AI technology to create new tasks, products, business occupations, and competencies. In my example about how AI tools may revolutionize scientific discovery, AI models would be trained to help scientists conceive of and test new materials so that humans can then be trained to become more specialized and provide better inputs into the AI models. Such an evolution would ultimately lead to much better possibilities for human discovery. But it is by no means guaranteed.

Allison Nathan: Will some—or maybe even most—of the substantial spending on AI technology today ultimately go to waste?

Daron Acemoglu: That is an interesting question. Basic economic analysis suggests that an investment boom should occur because AI technology today is primarily used for automation, which means that algorithms and capital are substituting for human labor, which should lead to investment. This explains why my estimates for GDP increases are nearly twice as large as my estimates for productivity increases.
But then reality supervenes and says that some of the spending will end up wasted because some projects will fail, and some firms will be too optimistic about the extent of the efficiency gains and cost savings they can achieve or their ability to integrate AI into their organizations. On the other hand, some of the spending will plant the seeds for the next, and more promising, phase of the technology. The devil is ultimately in the details. So, I don't have a strong prior as to how much of the current investment boom will be wasted vs. productive. But I expect both will happen.

Allison Nathan: Are other costs of AI technology not receiving enough attention?

Daron Acemoglu: Yes. GDP is not everything. Technology that has the potential to provide good information can also provide bad information and be misused for nefarious purposes. I am not overly concerned about deepfakes at this point, but they are the tip of the iceberg in terms of how bad actors could misuse generative AI. And a trillion dollars of investment in deepfakes would add a trillion dollars to GDP, but I don't think most people would be happy about that or benefit from it.

Allison Nathan: Given everything we've discussed, is the current enthusiasm around AI technology overdone?

Daron Acemoglu: Every human invention should be celebrated, and generative AI is a true human invention. But too much optimism and hype may lead to the premature use of technologies that are not yet ready for prime time. This risk seems particularly high today for using AI to advance automation. Too much automation too soon could create bottlenecks and other problems for firms that no longer have the flexibility and trouble-shooting capabilities that human capital provides. And, as I mentioned, using technology that is so pervasive and powerful—providing information and visual or written feedback to humans in ways that we don't yet fully understand and don't at all regulate—could prove dangerous.
Although I don't believe superintelligence and evil AI pose major threats, I often think about how the current risks might be perceived looking back 50 years from now. The risk that our children or grandchildren in 2074 accuse us of moving too slowly in 2024 at the expense of growth seems far lower than the risk that we end up moving too quickly and destroy institutions, democracy, and beyond in the process. So, the costs of the mistakes that we risk making are much more asymmetric on the downside. That's why it's important to resist the hype and take a somewhat cautious approach, which may include better regulatory tools, as AI technologies continue to evolve.
Addressing the AI growth debate

Joseph Briggs addresses the AI productivity and growth debate, arguing that generative AI will likely lead to significant economic upside.

We have long argued that generative AI could lead to significant economic upside, primarily owing to its ability to automate a large share of work tasks, with our baseline estimate implying as much as 15% cumulative gross upside to US labor productivity and GDP growth¹ following widespread adoption of the technology.

A significant boost to US labor productivity from generative AI
Effect of AI adoption on annual US labor productivity growth, 10y adoption period, pp (scenarios range from 0.3pp under much less powerful AI to 2.9pp under much more powerful AI, around a 1.5pp baseline)
Source: Goldman Sachs GIR.

That said, substantial debate exists around generative AI's potential macro impacts. Studies that assume generative AI will accelerate the development and adoption of robotics, or that view recent generative AI advances as foreshadowing the emergence of a "superintelligence", for example, estimate even more upside to productivity and GDP than our baseline forecast. We see such outcomes as possible but premature since they generally assume AI advancements well beyond the frontier of current models.

More notably, MIT economist Daron Acemoglu sees much more limited upside to US productivity and GDP than we expect, with his baseline estimates implying that generative AI will boost US total factor productivity (TFP) by 0.53% and GDP by 0.9% over the next 10 years (see pgs. 4-5). As we take similar approaches to assessing the economic impacts of generative AI, we explore what explains the large differences in our estimates.

Breaking down the differences
We find two main factors that explain the differences in our estimates versus those of Acemoglu.
First, Acemoglu assumes that generative AI will automate only 4.6% of total work tasks, as he estimates that only 19.9% of all tasks are exposed to AI and assumes that only 23% of exposed tasks will be cost-effective to automate within the next ten years. In contrast, we assume that generative AI will automate 25% of all work tasks following the technology's full adoption.

Second, Acemoglu's framework assumes that the primary driver of cost savings will be workers completing existing tasks more efficiently, and it ignores productivity gains from labor reallocation or the creation of new tasks. In contrast, our productivity estimates incorporate both worker reallocation—via displacement and subsequent reemployment in new occupations made possible by AI-related technological advancement—and new task creation that expands non-displaced workers' production potential.

Differences in these assumptions explain over 80% of the discrepancy between our 9.2%² and Acemoglu's 0.53% estimates of increases in TFP over the next decade.³ The remaining 20% of the gap reflects differences in cost savings and marginal productivity assumptions.

Footnotes:
¹ Our GDP estimate assumes that the capital stock evolves to match increased labor potential, which seems broadly validated by the sizable investment response aimed at facilitating the AI transition.
² This figure is calculated by multiplying the labor share of output, 62%, by our 15% estimate of the AI upside to labor productivity and growth.
³ The quantitative contribution of different channels to the discrepancy between Acemoglu's and our estimates depends on the order in which they are considered, with differences in exposure assumptions explaining more of the gap if differences in cost-savings assumptions are considered first, and vice versa. To reduce this sensitivity, we consider both orderings and present the average contributions.
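The footnote arithmetic behind the 9.2% figure, and the headline gap it implies versus Acemoglu's baseline, can be made concrete with a short sketch. This is an illustration using the rounded inputs published in the text (62% labor share, 15% labor productivity upside); the small difference from the published 9.2% reflects that rounding.

```python
# Sketch of the labor-share arithmetic from footnote 2 and the resulting TFP gap.
# Inputs are the rounded figures quoted in the text.

labor_share = 0.62                # labor share of output
labor_productivity_upside = 0.15  # GS 15% gross upside to labor productivity

gs_tfp_upside = labor_share * labor_productivity_upside
print(f"GS TFP upside: {gs_tfp_upside:.1%}")  # 9.3% from rounded inputs; published as 9.2%

acemoglu_tfp_upside = 0.0053      # Acemoglu's 0.53% baseline

gap = gs_tfp_upside - acemoglu_tfp_upside
print(f"Gap vs. Acemoglu baseline: {gap:.1%}")
# Per the text, exposure and reallocation/new-task assumptions explain over 80%
# of this gap; cost-saving and marginal-productivity assumptions explain the rest.
```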
For instance, Acemoglu assumes 27% cost savings based on two studies that he considers the most representative of AI's real-world impact, but cost savings would rise to 36% if the full set of studies were considered. We are also more optimistic that AI will raise non-displaced workers' output, largely because we expect AI automation to create new tasks and products.

Differences in macro estimates mostly reflect differences in assumptions around tasks that can be profitably automated and the reallocation of labor to new tasks
Reconciling estimates of AI impact on GDP: Acemoglu (2024) vs. GS (2023), % (a waterfall from Acemoglu's 0.5% TFP baseline through exposure, cost-saving, and reallocation/new task creation assumptions to the GS 9.2% TFP baseline, plus capital deepening to the 15% overall GDP estimate)
Source: Goldman Sachs GIR.

More widespread AI automation ahead
So, whose estimates of the share of automated tasks and of new task creation will more likely prove correct? We are very sympathetic to Acemoglu's argument that automation of many AI-exposed tasks is not cost-effective today, and may not become so even within the next ten years. AI adoption remains very modest outside of the few
industries (including computing and data infrastructure, information services, and motion picture and sound production) that we estimate will benefit the most, and adoption rates are likely to remain below levels necessary to achieve large aggregate productivity gains for the next few years. This explains why we raised our US GDP forecast by only 0.4pp by the end of our forecast horizon in 2034 (with smaller increases in other countries) when we incorporated an AI boost into our global potential growth forecasts last fall. When stripping out offsetting growth impacts from the partial redirection of capex from other technologies to AI and slower productivity growth in a non-AI counterfactual, this 0.4pp annual figure translates into a 6.1% GDP uplift from AI by 2034 vs. Acemoglu's 0.9% estimate.

Chart: AI adoption remains modest on average across industries (share of US firms using AI by sector, %). Source: Census Bureau, Goldman Sachs GIR.

That said, the full automation of AI-exposed tasks that is likely to occur over a longer horizon could generate significant cost savings, to the tune of several thousands of dollars per worker per year. The cost of new technologies also tends to fall rapidly over time. Given that cost-saving applications of generative AI will likely follow a similar pattern, and that the marginal cost of deployment will likely be very small once applications are developed, we expect AI adoption and automation rates to ultimately far exceed Acemoglu's 4.6% estimate.

Labor reallocation and new task creation on the horizon
We also disagree with Acemoglu's decision not to incorporate productivity improvements from new tasks and products into his estimates, partly given his questioning of whether AI adoption will lead to labor reallocation and the creation of new tasks.
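As a rough check on how an annual growth boost maps to a cumulative GDP uplift, the 0.4pp figure above can be compounded over the ten-year horizon. A minimal sketch; note that the 0.6pp gross figure below is an illustrative assumption (the report states only that the 6.1% uplift results from stripping out the offsets described above):

```python
# Compound an annual growth-rate boost (in percentage points) over a ten-year horizon.
def cumulative_uplift(annual_pp: float, years: int = 10) -> float:
    """Cumulative GDP uplift, in %, from a constant annual boost of `annual_pp` pp."""
    return ((1 + annual_pp / 100) ** years - 1) * 100

# The net 0.4pp boost alone compounds to roughly 4.1% over ten years.
print(round(cumulative_uplift(0.4), 1))
# A hypothetical gross boost of ~0.6pp (assumed here, before offsets) compounds
# to roughly 6.2%, in the neighborhood of the 6.1% uplift cited in the text.
print(round(cumulative_uplift(0.6), 1))
```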
The historical record provides strong evidence that economic growth stems mainly from technology-driven reallocation of resources and expansion of the production frontier, and we anticipate that AI will raise output both by raising demand in areas where labor has a comparative advantage and by creating new opportunities that were previously technologically or economically infeasible. This dynamic clearly played out following the emergence of information technology, which created new occupations like webpage designers, software developers, and digital marketing professionals and indirectly drove demand for service sector workers in industries like healthcare, education, and food services. It is also visible over a much longer horizon in recent work by MIT economist David Autor and coauthors. Using Census data, they find that 60% of workers today are employed in occupations that did not exist in 1940, with their estimates implying that the technology-driven creation of new occupations accounts for more than 85% of employment growth over the last 80 years.

Chart: Automation of work tasks should generate significant economic value, particularly as costs decline (value of automating work task categories per worker, % of time (lhs), $ (rhs)). Source: Goldman Sachs GIR.
Chart: Technological creation of new opportunities is a main driver of employment and economic growth (employment by new and pre-existing occupations, millions). Source: Autor et al. (2022), Goldman Sachs GIR.

Accordingly, while we believe that Acemoglu's relatively pessimistic assessment of generative AI's economic potential highlights valid concerns that the macroeconomic impacts could be more backloaded than is commonly appreciated, we maintain that generative AI's large potential to drive automation, cost savings, and efficiency gains should eventually lead to significant uplifts of productivity and GDP.

Joseph Briggs, Senior Global Economist
Email: joseph.briggs@gs.com
Goldman Sachs & Co. LLC
Tel: 212-902-2163
The state of the AI transition... in pics

Chart: AI investment has surged over the last several years... (actual and forecasted revenues by AI-exposed sector, index, 4Q19=100; dashed lines indicate consensus revenue forecasts). Source: FactSet, Goldman Sachs GIR.
Chart: ...and the market has significantly upgraded its AI investment expectations across the AI hardware stack... (change in consensus revenue forecasts since March 2023, $bn, annualized). Source: FactSet, Goldman Sachs GIR.
Chart: ...though much less so across the broader AI space so far (change in consensus revenue forecasts since March 2023, $bn, annualized). Source: FactSet, Goldman Sachs GIR.
Chart: AI-related software investment isn't yet visible in the US' or other DMs' official national accounts data... (AI-related investment in software: national accounts, log index, 3Q22=100; shown as a log index because software investment grows at different exponential rates across countries, so steady growth appears as a line with a constant slope and accelerating growth as a line with an increasing slope). Source: Haver Analytics, Goldman Sachs GIR.
Chart: ...nor is AI-related hardware investment, suggesting that other factors are currently playing a larger role than AI in shaping the aggregate capex outlook (AI-related investment in hardware: national accounts, log index, 3Q22=100). Source: Haver Analytics, Goldman Sachs GIR.
Chart: However, manufacturers' shipments for some AI-related components have surged... (US nominal manufacturing sales, AI-related categories, index, Sept. 2022=100, 3m average). Source: Haver Analytics, Goldman Sachs GIR.
Chart: ...though this increase has not been uniform across the major developed economies, with the US leading the pack (DM nominal manufacturing sales, AI-related categories, index, Sept. 2022=100, 3m average). Source: Haver Analytics, Goldman Sachs GIR.
Chart: AI adoption remains muted on average across industries, with adoption likely to pick up only modestly over the next six months... (share of US firms using AI by sector, %). Source: Census Bureau, Goldman Sachs GIR.
Chart: ...though adoption rates are much higher among technology industries and other digitally-enabled fields... (share of US firms using AI, top 15 subsectors, %). Source: Census Bureau, Goldman Sachs GIR.
Chart: ...and are expected to increase significantly across these sectors over the next six months (expected change in share of firms using AI over the next six months, top 15 subsectors, pp). Source: Census Bureau, Goldman Sachs GIR.
Chart: Despite rising adoption rates, little evidence of net labor displacement from AI exists so far... (layoffs attributed to "Technological Change" and "Artificial Intelligence" in corporate announcements, 000s). Source: Challenger, Gray & Christmas, Goldman Sachs GIR.
Chart: ...with unemployment not looking markedly different across jobs (unemployment rate by AI exposure, top 20% most AI-exposed jobs vs. all other jobs, %, 3m average). Source: Census Bureau, IPUMS, Goldman Sachs GIR.

Special thanks to GS GIR global economist Devesh Kodnani for these charts, which were originally published in an April 2024 Global Economics Analyst.
Jim Covello is Head of Global Equity Research at Goldman Sachs. Below, he argues that to earn an adequate return on costly AI technology, AI must solve very complex problems, which it currently isn't capable of doing, and may never be.

Allison Nathan: You haven't bought into the current generative AI enthusiasm nearly as much as many others. Why is that?

Jim Covello: My main concern is that the substantial cost to develop and run AI technology means that AI applications must solve extremely complex and important problems for enterprises to earn an appropriate return on investment (ROI). We estimate that the AI infrastructure buildout will cost over $1tn in the next several years alone, which includes spending on data centers, utilities, and applications. So, the crucial question is: what $1tn problem will AI solve? Replacing low-wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions I've witnessed in my thirty years of closely following the tech industry.

Many people attempt to compare AI today to the early days of the internet. But even in its infancy, the internet was a low-cost technology solution that enabled e-commerce to replace costly incumbent solutions. Amazon could sell books at a lower cost than Barnes & Noble because it didn't have to maintain costly brick-and-mortar locations. Fast forward three decades, and Web 2.0 is still providing cheaper solutions that are disrupting more expensive solutions, such as Uber displacing limousine services.
While the question of whether AI technology will ever deliver on the promise many people are excited about today is certainly debatable, the less debatable point is that AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn't designed to do.

Allison Nathan: Even if AI technology is expensive today, isn't it often the case that technology costs decline dramatically as the technology evolves?

Jim Covello: The idea that technology typically starts out expensive before becoming cheaper is revisionist history. E-commerce, as we just discussed, was cheaper from day one, not ten years down the road. But even beyond that misconception, the tech world is too complacent in its assumption that AI costs will decline substantially over time. Moore's law in chips, which enabled the smaller, faster, cheaper paradigm driving the history of technological innovation, only proved true because competitors to Intel, like Advanced Micro Devices, forced Intel and others to reduce costs and innovate over time to remain competitive. Today, Nvidia is the only company currently capable of producing the GPUs that power AI. Some people believe that competitors to Nvidia from within the semiconductor industry or from the hyperscalers (Google, Amazon, and Microsoft) themselves will emerge, which is possible. But that's a big leap from where we are today given that chip companies have tried and failed to dethrone Nvidia from its dominant GPU position for the last 10 years.
Technology can be so difficult to replicate that no competitors are able to do so, allowing companies to maintain their monopoly and pricing power. For example, Advanced Semiconductor Materials Lithography (ASML) remains the only company in the world able to produce leading-edge lithography tools and, as a result, the cost of its machines has increased from tens of millions of dollars twenty years ago to, in some cases, hundreds of millions of dollars today. Nvidia may not follow that pattern, and the scale in dollars is different, but the market is too complacent about the certainty of cost declines.

The starting point for costs is also so high that even if costs decline, they would have to do so dramatically to make automating tasks with AI affordable. People point to the enormous cost decline in servers within a few years of their inception in the late 1990s, but the number of $64,000 Sun Microsystems servers required to power the internet technology transition in the late 1990s pales in comparison to the number of expensive chips required to power the AI transition today, even without including the replacement of the power grid and other costs necessary to support this transition that on their own are enormously expensive.

Allison Nathan: Are you just concerned about the cost of AI technology, or are you also skeptical about its ultimate transformative potential?

Jim Covello: I'm skeptical about both. Many people seem to believe that AI will be the most important technological invention of their lifetime, but I don't agree given the extent to which the internet, cell phones, and laptops have fundamentally transformed our daily lives, enabling us to do things never before possible, like make calls, compute, and shop from anywhere.
Currently, AI has shown the most promise in making existing processes, like coding, more efficient, although estimates of even these efficiency improvements have declined, and the cost of utilizing the technology to solve tasks is much higher than existing methods. For example, we've found that AI can update historical data in our company models more quickly than doing so manually, but at six times the cost.

More broadly, people generally substantially overestimate what the technology is capable of today. In our experience, even basic summarization tasks often yield illegible and nonsensical results. This is not a matter of just some tweaks being required here and there; despite its expensive price tag, the technology is nowhere near where it needs to be in order to be useful for even such basic tasks. And I struggle to believe that the technology will ever achieve the cognitive reasoning required to substantially augment or replace human interactions. Humans add the most value to complex tasks by identifying and understanding outliers and nuance in a way that is difficult to imagine a model trained on historical data ever being able to do.

Interview with Jim Covello
Allison Nathan: But wasn't the transformative potential of the technologies you mentioned difficult to predict early on? So, why are you confident that AI won't eventually prove to be just as, or even more, transformative?

Jim Covello: The idea that the transformative potential of the internet and smartphones wasn't understood early on is false. I was a semiconductor analyst when smartphones were first introduced and sat through literally hundreds of presentations in the early 2000s about the future of the smartphone and its functionality, with much of it playing out just as the industry had expected. One example was the integration of GPS into smartphones, which wasn't yet ready for primetime but was predicted to replace the clunky GPS systems commonly found in rental cars at the time. The roadmap on what other technologies would eventually be able to do also existed at their inception. No comparable roadmap exists today. AI bulls seem to just trust that use cases will proliferate as the technology evolves. But eighteen months after the introduction of generative AI to the world, not one truly transformative, let alone cost-effective, application has been found.

Allison Nathan: Even if the benefits and the returns never justify the costs, do companies have any other choice but to pursue AI strategies given the competitive pressures?

Jim Covello: The big tech companies have no choice but to engage in the AI arms race right now given the hype around the space and FOMO, so the massive spend on the AI buildout will continue. This is not the first time a tech hype cycle has resulted in spending on technologies that don't pan out in the end; virtual reality, the metaverse, and blockchain are prime examples of technologies that saw substantial spend but have few, if any, real-world applications today.
And companies outside of the tech sector also face intense investor pressure to pursue AI strategies even though these strategies have yet to yield results. Some investors have accepted that it may take time for these strategies to pay off, but others aren't buying that argument. Case in point: Salesforce, where AI spend is substantial, recently suffered the biggest daily decline in its stock price since the mid-2000s after its Q2 results showed little revenue boost despite this spend.

Allison Nathan: What odds do you place on AI technology ultimately enhancing the revenues of non-tech companies? And even without revenue expansion, could cost savings still pave a path toward multiple expansion?

Jim Covello: I place low odds on AI-related revenue expansion because I don't think the technology is, or will likely be, smart enough to make employees smarter. Even one of the most plausible use cases of AI, improving search functionality, is much more likely to enable employees to find information faster than to enable them to find better information. And if AI's benefits remain largely limited to efficiency improvements, that probably won't lead to multiple expansion because cost savings just get arbitraged away. If a company can use a robot to improve efficiency, so can the company's competitors. So, a company won't be able to charge more or increase margins.
Allison Nathan: What does all of this mean for AI investors over the near term, especially since the "picks and shovels" companies most exposed to the AI infrastructure buildout have already run up so far?

Jim Covello: Since the substantial spend on AI infrastructure will continue despite my skepticism, investors should remain invested in the beneficiaries of this spend, in rank order: Nvidia; utilities and other companies exposed to the coming buildout of the power grid to support AI technology; and the hyperscalers, which are spending substantial money themselves but will also garner incremental revenue from the AI buildout. These companies have indeed already run up substantially, but history suggests that an expensive valuation alone won't stop a company's stock price from rising further if the fundamentals that made the company expensive in the first place remain intact. I've never seen a stock decline only because it's expensive; a deterioration in fundamentals is almost always the culprit, and only then does valuation come into play.

Allison Nathan: If your skepticism ultimately proves correct, AI's fundamental story would fall apart. What would that look like?

Jim Covello: Overbuilding things the world doesn't have use for, or is not ready for, typically ends badly. The NASDAQ declined around 70% between the highs of the dot-com boom and the founding of Uber. The bursting of today's AI bubble may not prove as problematic as the bursting of the dot-com bubble simply because many companies spending money today are better capitalized than the companies spending money back then. But if AI technology ends up having fewer use cases and lower adoption than consensus currently expects, it's hard to imagine that won't be problematic for many companies spending on the technology today. That said, one of the most important lessons I've learned over the past three decades is that bubbles can take a long time to burst.
That's why I recommend remaining invested in AI infrastructure providers. If my skeptical view proves incorrect, these companies will continue to benefit. But even if I'm right, at least they will have generated substantial revenue from the theme that may better position them to adapt and evolve.

Allison Nathan: So, what should investors watch for signs that a burst may be approaching?

Jim Covello: How long investors will remain satisfied with the mantra that "if you build it, they will come" remains an open question. The more time that passes without significant AI applications, the more challenging the AI story will become. My guess is that if important use cases don't start to become more apparent in the next 12-18 months, investor enthusiasm may begin to fade. But the more important area to watch is corporate profitability: sustained corporate profitability will allow sustained experimentation with negative-ROI projects. As long as corporate profits remain robust, these experiments will keep running. So, I don't expect companies to scale back spending on AI infrastructure and strategies until we enter a tougher part of the economic cycle, which we don't expect anytime soon. That said, spending on these experiments will likely be one of the first things to go if and when corporate profitability starts to decline.
Kash Rangan and Eric Sheridan are Senior Equity Research Analysts at Goldman Sachs covering US software and internet, respectively. Below, they argue that while AI remains a work in progress, the large sums of money being put toward it should pay off, eventually.

Allison Nathan: When we last spoke in July 2023, you were both very enthused about the potential of generative AI. Are you just as optimistic today?

Kash Rangan: I am just as enthusiastic about generative AI's long-term potential as I was a year ago, and perhaps even more. The pace of technological change over the past 12 months has been mind-blowing, with hardly a week going by without reports of a newer, and better, AI model. The infrastructure buildout has also greatly exceeded expectations. Hyperscalers, the large cloud computing companies that provide computing and storage services at scale, have spent $60-80bn in incremental capital above regular cloud capex on critical tools for building and training AI models. And rays of hope have emerged across several domains that demonstrate AI's productivity benefits. In the creative domain, generative AI has produced new design ideas in minutes that previously would've taken many hours, shortening the time it takes to bring an idea to market. In the code development domain, AI has automated low-level code writing, freeing up developers to work on more complex and productive tasks. And in the customer support domain, ServiceNow, a digital workflow software company, has reported an 80% reduction in the average time it takes to resolve a customer service problem thanks to AI technology.
That said, applications ultimately drive the success of tech cycles, and we have yet to identify AI's "killer application", akin to the Enterprise Resource Planning (ERP) software that was the killer application of the late-1990s compute cycle, the search and e-commerce applications of the 2000-10 tech cycle that achieved massive scale owing to the rise of x86 Linux open-source databases, or the cloud applications that enabled the building of low-cost compute infrastructure at massive scale during the most recent 2010-20 tech cycle. But this shouldn't come as a surprise given that every computing cycle follows a progression known as IPA: infrastructure first, platforms next, and applications last. The AI cycle is still very much in the infrastructure buildout phase, so finding the killer application will take more time, but I believe we'll get there.

Eric Sheridan: I agree that the visibility into what this infrastructure buildout will translate into in terms of AI applications and adoption rates remains relatively low. And several notable issues at the application layer, such as AI chatbots "hallucinating", or giving false answers to user prompts, have called into question the scalability of generative AI. So, the technology is still very much a work in progress. But it's impossible to sit through demonstrations of generative AI's capabilities at company events or developer conferences and not come away excited about its long-term potential.

Allison Nathan: It's well known that Nvidia has benefitted massively in the current "picks and shovels" phase of the cycle. Are firms beyond Nvidia currently monetizing the gains from generative AI technology?

Eric Sheridan: Nvidia has certainly garnered significant revenue as its graphics processing unit (GPU) chips have become the nerve center of AI systems. But the semiconductor industry more broadly has benefitted from the voracious need for chips.
Cloud computing companies have also performed well owing to the enormous computing capacity required to train and run AI models, with the three large hyperscalers, Microsoft, Alphabet, and Amazon, seeing an acceleration in revenue growth in the last quarter. So, capital is shifting into the AI theme, the theme and the capital are aligning around the buildout, and many companies exposed to semiconductors and computing workloads are monetizing these gains. This is not just an Nvidia story.

Allison Nathan: Are you concerned that the hundreds of billions of dollars in AI capex that big tech firms are estimated to spend in coming years is a sign of irrational exuberance, and that the payoff may be low or never come?

Eric Sheridan: Those who argue that this is a phase of irrational exuberance focus on the large amounts of dollars being spent today relative to two previous large capex cycles: the late-1990s/early-2000s long-haul capacity infrastructure buildout that enabled the development of Web 1.0, or desktop computing, and the 2006-2012 Web 2.0 cycle involving elements of spectrum, 5G networking equipment, and smartphone adoption. But such an apples-to-apples comparison is misleading; the more relevant metric is dollars spent vs. company revenues. Cloud computing companies are currently spending over 30% of their cloud revenues on capex, with the vast majority of incremental dollar growth aimed at AI initiatives. For the overall technology industry, these levels are not materially different than those of prior investment cycles that spurred shifts in enterprise and consumer computing habits. And, unlike during the Web 1.0 cycle, investors now have their antenna up for return on capital. They're demanding visibility on how a dollar of capex spending ties back to increased revenues, and punishing companies that can't draw a dotted line between the two.
We saw this with Meta a few months ago when the company's stock fell sharply after it announced plans to spend several billion dollars on AI, potentially disrupting its core business in the process, while offering little visibility into the eventual payoff. So, while I would never say I'm not concerned about the possibility of no payback, I'm not particularly worried about it today, though I could become more concerned if scaled consumer applications don't emerge over the next 6-18 months.

Kash Rangan: Spending is certainly high today in absolute dollar terms. But this capex cycle seems more promising than even previous capex cycles.

A discussion on generative AI