Beth Kindig is the CEO and Lead Tech Analyst for the I/O Fund and delivers weekly in-depth tech stock analysis to her free newsletter subscribers. Sign up here to receive free weekly stock tips. I/O Fund has a cumulative 3-year audited return of 47%, beating Ark and the majority of Wall Street funds over four audit periods in 2020, 2021 and 2022.
Nvidia's management team will focus on the H200 in the upcoming earnings call, but make no mistake, we will end this year in full-on Blackwell territory. The new architecture is at the forefront of training and inference for trillion+ parameter models.
I have previously called CUDA the moat for Nvidia's AI data center story. Nvidia is the world's leading GPU design company, which bears reminding, since Wall Street places so little emphasis on what the designs intend to solve. For those paying close attention, there are clues that the company's fast and furious data center growth will see a second wind with Blackwell.

[Photo caption: The Blackwell chip, left, and the Hopper GPU chip, right, during the Nvidia GPU Technology Conference in San Jose, California, US, on Monday, March 18, 2024. Dubbed the Woodstock festival of AI by Bank of America analysts, GTC this year was set to draw 300,000 in-person and virtual attendees for the debut of Nvidia's B100. Photographer: David Paul Morris/Bloomberg]

Last quarter, in fiscal Q4, Nvidia reported growth of 265%. Last quarter is likely to be peak growth for the company. We pointed this out three months ago when we wrote: “Even if we see a beat and raise, the slowing growth in the second half will be hard to overcome due to high comps. As mentioned in the introduction, Nvidia will begin to lap some stellar quarters come the October CY2024 quarter, as the growth in October of CY2023 was 205.5% YoY.” At the time of writing, the revenue estimates for Nvidia point to growth of 242%. A beat/raise this quarter is not likely to flow through to a higher growth rate in H2 compared to what we saw in Q4 and what we will see in Q1. Therefore, even if Q1 inches slightly past fiscal Q4 tomorrow evening, we have hit peak growth. Typically, a growth investor should be cautious when a company hits its peak growth rate after a drastic rise in the stock price.
Blackwell GPUs are expected to be priced starting at $30,000 to $40,000 but will have more expensive memory components with HBM3e. As long as margins remain within range, this will not be consequential, considering Nvidia is posting organic growth rather than growth through acquisitions, which is where rapid growth is bought rather than earned. The quality of Nvidia's growth is much better than what tech investors are used to, and this is predominantly why Nvidia stock is resilient. As supply/demand becomes more balanced, it will be Nvidia's aggressive product road map, which in many cases is designed to compete with itself, that will keep pricing power stable, starting with Blackwell.

Some customers are reportedly holding off on Hopper GPUs in anticipation of Blackwell GPUs. The market may interpret this as weakness, but it is actually a sign of immense strength. Nvidia needs to pass the baton from the H100s and H200s to the Blackwell architecture for the stock price to extend. We are less concerned with what happens in the immediate term; in fact, we have stated a few times that Nvidia is a buy on dips, implying the stock won't go up forever. Instead, we are encouraged to see early signs of a careful transition to the next architecture to help inform our next buy.

There is nothing quite like rapid earnings revisions intra-quarter to determine the quality of a position. For example, consider that Nvidia sold off directly after the November report, yet has gone up a rapid 91% since. The earnings revisions are why Nvidia is so strong intra-quarter: this upcoming quarter is expected to report growth of 242%. Last August, growth for the April quarter was expected to be 91.6%. Only three months ago, the estimates for the April quarter were for growth of 197.5%. Here is a chart we published illustrating these revisions.
Stated in terms of revenue, this quarter's estimates have nearly doubled from $13.8 billion in August to $24.5 billion. Next quarter, the company is expected to report growth of 98.7%; last November, this was expected to be growth of 44.6%. Stated in terms of revenue, next quarter's estimate has gone up $7 billion, from $19.5 billion in November to $26.7 billion in May. In the past three months alone, the estimates went up $4 billion.

Below, we discuss why margins, cash flow and strong earnings support our decision to buy on dips. However, equally as important, there is also a decent probability that FY2026 and FY2027 revenue estimates are too low. The most bullish analyst, from KeyBanc, is calling for as much as a $200 billion data center segment by 2025. HSBC believes Nvidia's FY26 revenue could be as high as $196 billion, which implies about a $192 billion data center segment. Loop Capital foresees a $150 billion data center segment as soon as this year, while Wells Fargo has estimates for a $150 billion data center segment by 2027. The exact timing differs from analyst to analyst, but the direction is the same.

Let's break down the weight of those comments with some back-of-the-napkin math, which shows that analysts are currently estimating about $122.4B in data center revenue for FY2026. The more bullish analyst estimates of $200 billion in data center revenue imply roughly 63% upside to that consensus figure. These are the current estimates, yet if the bullish analysts are correct, then the far right of the graph will end in $50B quarterly revenue. The difference between the current consensus and this much higher trajectory can be summarized in one word: Blackwell. There are additional data points in the supply chain and on the demand side that support Blackwell seeing an increase in orders over Hopper.
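Before turning to those supply-chain data points, the back-of-the-napkin math above can be sketched in a few lines (the figures are the analyst estimates quoted in this article, not official guidance):

```python
# Consensus vs. bullish FY2026 data center estimates, in $ billions,
# using the figures quoted above.
consensus_dc_fy26 = 122.4   # approximate analyst consensus for data center revenue
bullish_dc_fy26 = 200.0     # the more bullish sell-side scenario

# Upside implied if the bullish scenario plays out.
upside = bullish_dc_fy26 / consensus_dc_fy26 - 1
print(f"Implied upside to consensus: {upside:.1%}")   # prints 63.4%

# A $200B annual data center run-rate works out to roughly $50B per quarter,
# which is where "the far right of the graph" would land.
quarterly_run_rate = bullish_dc_fy26 / 4
print(f"Implied quarterly run-rate: ${quarterly_run_rate:.0f}B")   # prints $50B
```

This is why the gap between consensus and the bullish camp matters: it is not a rounding error but a roughly 63% difference in the size of the segment.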
For example, Taiwan Semi's CoWoS capacity, which is essential for Blackwell's architecture, is estimated to rise to 40,000 units/month by the end of 2024, more than a 150% YoY increase from ~15,000 units/month at the end of 2023. Applied Materials has boosted its forecast for HBM packaging revenue from a prior view of 4X growth to 6X growth this year. Note: it's important to remember this does not tell us what will happen on tomorrow evening's earnings call, as revenue is reported when product ships to the customer. However, it helps to consider that there are directionally bullish data points, should the market sell off following the report and provide us a lower entry. Notably, the premiere component for the H200 and Blackwell is HBM3e memory, which is currently supply constrained. Samsung and SK Hynix are both racing to ramp HBM3e supply.

CEOs of major companies in AI acceleration are in agreement that the total addressable market is much, much larger than today's market size. Lisa Su of AMD has stated the AI chip market will reach $400B by 2027. Intel's CEO has stated AI chips will become a $1T opportunity by 2030, which is almost twice the size of the entire chip industry in 2023. Big Tech capex is supporting this growth. Our firm has been especially strong on correlating capex to AI investments for our paid research members, where we published analysis in our newsletter tracking a 35% YoY increase to $200 billion in capex across Big Tech companies. A disproportionate amount of this will go to Nvidia. We're closely tracking Big Tech's capex plans for 2024 and how this will flow downstream to AI hardware companies. The I/O Fund had a 45% allocation to AI going into 2023, one of the highest on record. Today, the AI allocation is higher with many lesser-known names.

A curveball in the report could be higher than expected China revenue due to China-specific GPUs, such as the H20.
Similar to Big Tech in the United States, China's main players are stockpiling GPUs to secure their lead in AI. Here is what management said on last quarter's call: “Growth was strong across all regions except for China, where our Data Center revenue declined significantly following the U.S. government export control regulations imposed in October. Although we have not received licenses from the U.S. government to ship restricted products to China, we have started shipping alternatives that don't require a license for the China market. China represented a mid-single-digit percentage of our Data Center revenue in Q4, and we expect it to stay in a similar range in the first quarter.”

The product road map is the single most important thing investors should be focused on. A good chunk of the AI accelerator story is understood at this point. What is not understood is how aggressive Nvidia is becoming by speeding up to a one-year release cycle for its next generation of GPUs instead of a two-year release cycle. This means Nvidia is competing with itself by putting Blackwell dangerously close to Hopper's product cycle. This move is bold, it's daring, and it's absolutely necessary.

Blackwell remains on 4nm dies, similar to the Hopper architecture. What is different is that Blackwell has 2 reticle-sized GPU dies. Reticle size refers to the limit of the chip surface that can be exposed by a single mask; the limit is set by the lithography equipment. At one point it was expected Blackwell would be on 3nm dies, yet for reasons unknown, Nvidia is moving forward with 4nm. Since Nvidia is not able to offer a more advanced process node, the company is instead doubling the silicon.

The B100 is a replacement chip, which means customers can remove the H100 and place the B100 in the same rack. The B100 is air-cooled and doubles NVLink speeds from the H100 and H200. The B100 is expected to deliver a 2.5X training improvement and a 5X inference improvement over the H100.
This is due to the B200 having 208 billion transistors compared to the H100's 80 billion transistors. The B200 will also have 20 petaflops of FP4, compared to the H100's 4 petaflops of FP8 (which reaches 32 petaflops of FP8 across the eight GPUs in a DGX H100 system). The difference is that the smaller bit size allows for an economical way to achieve more speed when giving up a small amount of accuracy doesn't make a critical difference. This also helps in the face of a slowing Moore's Law.

Following the release of the Hopper H100, Intel released Gaudi2, which supports FP8. About two years back, chip makers Graphcore, AMD and Qualcomm pushed for the FP8 floating point format. However, the recent B200 will have a second-generation transformer engine that supports 4-bit floating point, with the goal of doubling the performance and the size of models the memory can support while maintaining accuracy.

Part of the secret sauce of the H100 is the transformer engine. The A100 lacked support for FP8 compute at default, whereas the H100 leveraged a transformer engine to switch between FP8 and FP16, depending on the workload. The second-generation transformer engine in the Blackwell architecture will offer FP4. This is helpful because AI models are moving toward neural nets that lean on the lowest precision that still yields an accurate result. In this case, 4 bits double the throughput of 8-bit units, compute faster and more efficiently, and require less memory and memory bandwidth. The main feature of the transformer engine is the ability to choose what precision is needed for each layer in the neural network at each step, transitioning between 4 bits, 8 bits, 16 bits, or 32 bits. The H100 is able to work with two forms of 8-bit numbers, with either 5 bits or 4 bits as the exponent: E5M2 and E4M3. This is important because E5M2, with its wider dynamic range, may be favored for gradients in back propagation, while E4M3, with its extra bit of precision, may be favored for the forward pass and inference.
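The range-versus-precision tradeoff between E5M2 and E4M3 can be made concrete with a small sketch. This is a simplified model of the two layouts: E5M2 follows IEEE conventions (the all-ones exponent encodes inf/NaN), while E4M3 reclaims that exponent and reserves only the all-ones mantissa pattern for NaN.

```python
def fp8_max_finite(exp_bits: int, man_bits: int, ieee_style: bool) -> float:
    """Largest finite value representable by a simple FP8 layout."""
    bias = 2 ** (exp_bits - 1) - 1
    if ieee_style:
        # IEEE-style (E5M2): the all-ones exponent is inf/NaN,
        # so the top usable exponent is one below it.
        exponent = (2 ** exp_bits - 2) - bias
        fraction = 2 - 2 ** -man_bits        # all mantissa bits set
    else:
        # E4M3-style: the all-ones exponent is usable; only the
        # all-ones mantissa at that exponent is reserved for NaN.
        exponent = (2 ** exp_bits - 1) - bias
        fraction = 2 - 2 ** (1 - man_bits)   # one step below all-ones
    return fraction * 2.0 ** exponent

print(fp8_max_finite(5, 2, ieee_style=True))    # E5M2 -> 57344.0 (wide range)
print(fp8_max_finite(4, 3, ieee_style=False))   # E4M3 -> 448.0 (more precision)
```

The wider exponent of E5M2 buys dynamic range, which gradients need, while E4M3's extra mantissa bit buys precision within a narrower range; FP4 pushes this same tradeoff one step further.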
Building on the first-gen transformer engine, the B200's second-gen transformer engine will support double the compute and model sizes with new 4-bit floating point AI inference capabilities.

According to the current product road map, the GB200 will be released before the B200 GPUs. The real fireworks will begin with the GB200 NVL36/NVL72 systems in late 2024 and then continue with the B200 GPUs in early 2025. The GB200 Grace Blackwell chip connects two Blackwell Tensor Core GPUs with the Nvidia Grace CPU. The GB200 NVL72, a rack-scale exascale supercomputer, connects 36 Grace CPUs with 72 Blackwell GPUs in a liquid-cooled rack design. Reportedly, the average sales prices of the NVL36 and NVL72 server racks will be $1.8 million and $3 million, respectively. Notably, it's expected the GB200 systems will have strong margins due to using an in-house CPU.

The GB200 will provide 4X faster training performance than H100 HGX systems and will include a second-generation transformer engine with FP4/FP6 Tensor Cores. As stated above, the 4nm process integrates two GPU dies, connected by a 10 TB/s NVLink, with 208 billion transistors. Fifth-generation NVLink enables multi-GPU communication at high speed, reaching 1.8 TB/s bidirectional throughput, or 14X the bandwidth of PCIe for a single GPU. Therefore, it's the compute and the communication capabilities of the upcoming GB200 release that are important to consider.

Why GB200s and B200s will drive more demand: To scale up a model, AI departments utilize a Mixture of Experts approach. MoE distributes a computational load across multiple “experts” and trains across thousands of GPUs using what is called model and pipeline parallelism.
This enables more compute-efficient pretraining, yet the parameters still need to be loaded in RAM, so the memory requirements remain high. For inference, GB200 will deliver “a 30X speedup” for 1 trillion+ parameter models by leveraging FP4 precision and fifth-generation NVLink. This is what the leap in real-time throughput for inference looks like for a 1.8 trillion parameter model.

Blackwell is for the trillion+ parameter era of generative AI. The architecture is designed to support the largest language models today and is future-proofed with the GB200 NVL72 rack-scale solution, which is an exascale computer that contains up to 5,000 NVLink cables totaling 2 miles. You also have to consider that AMD came to market in its first release with nearly 2X the memory of the H100. Nvidia is remaining competitive with HBM3e, and soon HBM4, to help models run in memory. The GB200 also has a new decompression engine that allows GPUs to process and decompress compressed data sets to speed up database queries. Coupled with 8 TB/s of high memory bandwidth and high-speed NVLink, the GB200 systems deliver up to 18X faster database queries. In addition, there are up to 13X faster physics-based simulations compared to CPUs and 22X faster simulations for computational fluid dynamics.

High bandwidth memory offers higher bandwidth, capacity, performance, and lower power by vertically stacking up to twelve DRAM memory chips to shorten how far data has to travel, while also allowing for smaller form factors. Stacked memory chips are connected through something called “through-silicon vias,” or TSVs. HBM is increasingly being used to power machine learning, high performance data centers, and more recently, generative AI models.
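As a toy illustration of the Mixture of Experts routing described above (hypothetical sizes and random weights, not any production system), a small router scores each token against every expert and dispatches it to only the top-scoring few:

```python
import math
import random

random.seed(0)

NUM_EXPERTS = 4   # hypothetical; production MoE models use far more
TOP_K = 2         # experts activated per token
DIM = 8           # toy token embedding size

# Hypothetical router weights: one score vector per expert.
router = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route(token):
    """Return the top-k experts for a token with renormalized gate weights."""
    logits = [sum(w * x for w, x in zip(expert, token)) for expert in router]
    probs = softmax(logits)
    top = sorted(range(NUM_EXPERTS), key=lambda i: probs[i], reverse=True)[:TOP_K]
    norm = sum(probs[i] for i in top)
    return [(i, probs[i] / norm) for i in top]

token = [random.uniform(-1, 1) for _ in range(DIM)]
for expert_id, weight in route(token):
    print(f"expert {expert_id}: gate weight {weight:.2f}")
```

Only TOP_K experts run per token, which is the compute-efficiency point, yet every expert's parameters must still sit in memory — exactly why HBM capacity and bandwidth are the bottleneck Blackwell is built around.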
CoWoS (Chip-on-Wafer-on-Substrate) refers to TSMC's advanced packaging, which stacks memory and processor dies on an interposer to create chiplet-based packages. The architecture leverages through-silicon vias and micro-bumps for shorter interconnect lengths and reduced power consumption compared to 2D packaging. The advanced CoWoS packaging needed to combine the logic system-on-chip with high bandwidth memory will take longer, and thus it's expected that Blackwell will be able to fully ship by Q4 this year or Q1 next year. How management guides for this will be up to them, but commentary should be fairly informative by the Q3 time frame. GPUs will move from 8Hi to 12Hi HBM3e configurations by 2025. These upgrades are needed to train and deploy large models with trillions of parameters in the near future. What Nvidia's product road map intends to accomplish is a way forward for real-time inference that is computationally efficient, cost-effective and energy efficient.

The recent surge in generative AI and AI GPUs, spurred by the success of OpenAI's ChatGPT and the development of hundreds of other large language models, is forecast to bring about a new DRAM market, underpinned by high-bandwidth memory and DDR5. HBM3 and HBM3e are becoming the next battleground for memory chip manufacturers as well as AI chip design companies, especially Nvidia and AMD, who are pushing the boundaries with the amount of memory bandwidth in each GPU. AMD's competing GPUs, the MI300 series, substantially boosted memory and bandwidth relative to the H100, utilizing Samsung's HBM3. The MI300A is shipping with 128GB of HBM3 memory, while the MI300X ships with 192GB of memory and 5.2 TB/s of bandwidth – that's 1.6x more bandwidth and 2.4x more HBM3 density than Nvidia's H100.
Nvidia is rapidly moving forward with its GPU road map, as it aims to launch its next-gen H200 and B100 GPUs this year, followed by the X100 GPU in 2025 – each GPU will accelerate AI inference times along an exponential curve, thus creating a need for more memory and more bandwidth.

Now that we've touched on the importance of Blackwell, let's get prepped for this evening. Here is what analysts are expecting:

For Q1, Nvidia is expected to report revenue of $24.6 billion, for growth of 242%. Management guided for revenue of $24 billion +/- 2%, for a growth rate of 233.7% at the midpoint. On a fiscal year basis, the company is expected to report revenue of $113.2 billion for growth of 85.8%. These estimates have doubled since August. The FY2026 growth rate of 26.1%, for revenue of $142.8 billion, and the FY2027 growth rate of 17.7%, for revenue of $168 billion, is where estimates are too low if there is a $200 billion data center segment in the medium term. For Q1, Nvidia is expected to report adjusted EPS of $5.58 for growth of 411.9%. For FY2025, adjusted EPS is expected to be $25.4 for growth of 96%. FY2026 adjusted EPS is expected to be $32.2 for growth of 26.6%.

As the story for Nvidia unfolds over the next few years, keep an eye on margins, as software will begin to positively impact the company with higher margins. The company is expected to end the year with $2 billion in software revenue. In the near term, and especially for this earnings report, it's likely that analysts ask about the costs associated with HBM3e, as memory components are increasing in cost. TrendForce has reported that HBM3 prices have risen since 2023, and HBM3e prices will be even higher than HBM3. Analysts may also ask about the yield issues that major memory suppliers SK Hynix, Micron, and Samsung are reported to be facing, given the complexities in the manufacturing process for HBM3e and its longer production cycle.

Management guided for gross margin of 76.3% for gross profit of $18.3 billion.
If reported in line, this will represent flat growth QoQ and 1170 bps of expansion from 64.6% in the year-ago quarter. Management's guide for adjusted gross margin is 77%; if reported, it will represent 30 bps of QoQ expansion and 1020 bps of expansion YoY. Operating margin was guided to 61.7% for operating profit of $14.8 billion. If reported, this will be flat QoQ yet up a whopping 32 points from 29.76%. This is the most rapid operating margin expansion that I have personally witnessed. It is rare, even for a hyper-growth company, to report a 32-point expansion on this line item. The net margin guide is 52.1%. If reported, it will be down sequentially, yet a remarkable 23.7-point expansion on a YoY basis.

Last quarter, Nvidia reported operating cash flow of $11.5 billion for a margin of 52%. The free cash flow of $11.2 billion represents a margin of 50.7%. The fiscal year free cash flow of $26.9 billion was more than 7 times higher than the fiscal year 2023 free cash flow of $3.75 billion.

The data center segment reported revenue of $18.4 billion for growth of 409% YoY and was up 29% QoQ. Nvidia's tough comps kick in with the Q2 July quarter, when the company reported DC revenue of $10.3 billion for growth of 171%, and thus the guide is key. Management will not guide for DC specifically, but it will be easy enough for analysts to read between the lines that any beat/raise on Q2 is likely coming from the DC segment. As management stated last quarter: “Fourth quarter data center growth was driven by both training and inference of generative AI and large language models across a broad set of industries, use cases and regions. The versatility and leading performance of our data center platform enables a high return on investment for many use cases, including AI training and inference, data processing and a broad range of CUDA accelerated workloads. We estimate in the past year approximately 40% of data center revenue was for AI inference.”

Gaming revenue of $2.8 billion was up 56% YoY and flat QoQ.
Nvidia has fared better than gaming peers due to the timing of the RTX 4000 Series. Professional Visualization reported revenue of $463 million for growth of 105% YoY and 11% QoQ.

As I discussed with Charles Payne today, the upcoming earnings report is only one piece of the story, whereas the ultimate fireworks will come when the Blackwell architecture begins to ship in Q3-Q4. We will see peak growth this quarter – even if we get the beat that Nvidia is becoming known for, H2 will certainly see a slowdown. This is normally a great jumping-off point for investors, but those who stick with Nvidia will be rewarded for a few reasons: This is an organic growth company, which is very rare in tech, where most growth is bought. That means Nvidia is likely to remain strong on margins and EPS, even in the face of slowing revenue growth. The supply chain is providing hints that analyst estimates for the data center are too low – there could be up to 65% upside on those estimates in the next 6-7 quarters. The reason I side with KeyBanc, Loop and others in thinking the estimates are too low – and this last point is critical – is because Nvidia is speeding up its product road map and introducing the Blackwell architecture to address the trillion+ parameter models that Big Tech will compete to create and train.

Nvidia has sold off 10% or greater about 9 times since the 2022 low. We see any dips as buying opportunities as we brace for Blackwell toward the end of this year. At the I/O Fund, we had five positions with returns over 100%, and seven positions beat the Nasdaq in 2023. This contributed to a cumulative return of 131% since May of 2020. For more in-depth research from Beth, including 15-page+ deep dives on the stock positions that the I/O Fund owns, take advantage of our biggest sale of the year in honor of our four-year anniversary and subscribe.

Please note: The I/O Fund conducts research and draws conclusions for the company's portfolio.
We then share that information with our readers and offer real-time trade notifications. This is not a guarantee of a stock's performance and it is not financial advice. Please consult your personal financial advisor before buying any stock in the companies mentioned in this analysis. Beth Kindig and the I/O Fund own shares in NVDA at the time of writing and may own stocks pictured in the charts.
