Semi Doped

NVIDIA's Marvell Strategy, Is Memory Different This Time?, Intel's Ireland Fab

Vikram Sekar and Austin Lyons


In this episode, Austin and Vik analyze NVIDIA's $2 billion investment in Marvell NVLink Fusion, exploring its implications for AI infrastructure, interconnect protocols, and the broader chip ecosystem. They also discuss the current memory market surge, DRAM pricing, and Intel's strategic fab buyback, providing deep insights into industry trends and future directions.

On Substack
Vik: https://www.viksnewsletter.com/
Austin: https://www.chipstrat.com/

Chapters

00:00 NVIDIA's $2 Billion Investment in Marvell
20:11 The Memory Market Crisis
20:16 The Future of Memory Pricing and Consumer Impact
22:55 The Cycle of Supply and Demand in Memory
27:23 AI's Impact on Memory Demand
31:46 Long-Term Agreements and Market Stability
35:07 Intel's Strategic Fab Buyback
40:44 Monopoly Analogy: Intel's Market Strategy

SPEAKER_01

So memory memory has gone crazy. That's all. That's all I can say. Next topic.

unknown

Next.

SPEAKER_01

Wait, it feels like you've been saying this for the last several weeks. No, it's gone crazier. I'll tell you why.

SPEAKER_00

Hello, listeners, and welcome to another Semi Doped podcast. I'm Austin Lyons of Chipstrat, and with me is Vikram Sekar from Vik's Newsletter. All right, Vik. So I thought let's start today with the NVIDIA-Marvell NVLink Fusion $2 billion investment topic. You know, I felt like this one didn't get much coverage in the press. Like not many people talked about it. I felt like it's just like, oh, another $2 billion NVIDIA investment, next.

SPEAKER_01

Yeah, it's the NVIDIA $2 billion cookie. Everybody gets a cookie. Exactly. Momentum gets a cookie, GoPro wins a cookie.

SPEAKER_00

You get $2 billion and you get $2 billion and you get $2 billion. Yeah, totally. So let me read a little bit from the press release and then we'll try to unpack what this means and what are some implications. The NVIDIA press release stated: this partnership builds on NVIDIA NVLink Fusion, a rack-scale platform that enables customers to develop semi-custom AI infrastructure using the NVIDIA NVLink ecosystem. And then it went on to say Marvell will provide custom XPUs and NVLink Fusion-compatible scale-up networking, while NVIDIA will provide the supporting technologies, including the Vera CPU, ConnectX NICs, BlueField DPUs, NVLink interconnect, Spectrum-X switches, and the rack-scale AI compute. So kind of a lot there. Let me hand it over to you. What is your reaction in reading this, and maybe what are some insights that you have?

SPEAKER_01

So the first thing that came to mind when I heard this was: why is NVIDIA investing in its competitor? Basically, I thought, okay, custom ASICs are here to eat NVIDIA's lunch and they are bad for NVIDIA. So why is NVIDIA giving a $2 billion cookie to Marvell so that they can do better at making custom ASICs, which would hypothetically displace GPUs, right? It sounds like a weird deal to me. That was my first reaction. But later I thought about it and it was like, okay, it's not quite as simple as that. So we'll talk about it.

SPEAKER_00

Yes, yes. I will say that the $2 billion piece is also interesting, which is like, why? Obviously, it's, let's partner together and I'm incentivizing you. But it's kind of interesting, because lots of companies can partner jointly on things and have skin in the game without a two billion dollar investment. But tell me, okay, how did you get past the, wait a minute, NVIDIA, you're investing in a competitor, XPUs?

SPEAKER_01

Yeah, so I read through the whole announcement too, and the XPU was the one aspect that stood out immediately. But if you look a little bit closer, the press release also says the companies will also collaborate on silicon photonics technology, kind of broad. And if you go down, you'll see that this is something to do with advanced optical interconnect solutions and silicon photonics technology. So that is very interesting to me, because apart from this whole custom ASIC discussion, which is obviously the top-of-mind thing, you have to remember that recently Marvell bought Celestial AI, which is great for their photonic interconnect between dies. They called it their photonic fabric technology. Think about it like, you know how Intel has EMIB, which is a little substrate you just put inside that connects two chiplets together with metal interconnects, and it can connect any part of chiplet one to any part of chiplet two? This is the same thing, but in optics: these are light links between chips. So this is very cool technology. And Celestial got bought by Marvell for this some time back, and my thought was, okay, if you're going to work together on silicon photonics, are you also going to work on integrating your HBM and GPU with optical links at some point? Is this the kind of scale-up? Then the other question I had was, is NVLink in the future going to be optical? That's really interesting. Yeah.

SPEAKER_00

Could it be, right? It's just a protocol.

SPEAKER_01

Yeah, it is. Right now, the physical layer NVLink runs on is copper. They're just metal wires that connect everything up, and there are SerDes circuits at either side that send the bits over this physical medium. There's no reason the physical medium can't be light. NVLink in theory can work over optics.

SPEAKER_00

Sure. Fascinating. Okay, that's really interesting. So I have another angle, and I think I can tie that together with it too. So when I started reading this and thinking about it: NVLink Fusion. The first question, going back to custom XPUs, is that NVIDIA is basically saying... at GTC recently, we saw them say, hey, now we're gonna have this end-to-end solution that actually has kind of multi-vendor silicon in some respects, because it's NVIDIA's Vera Rubin, but it's also the now-NVIDIA-branded Groq LPU. And so they're sort of saying, if we disaggregate inference, depending on the outcomes you're trying to achieve, you could have a system that gets you very low latency using some of the Groq stuff for decode, but also higher throughput with the NVIDIA stuff. And naturally my head went to, oh, it kind of feels like they're trying to say to Marvell's partners, you should put your XPU racks in the same data center with our NVIDIA GPUs and maybe our LPUs. And if you use NVLink to connect it all, it can all talk together. So it felt like a step in the direction of even more heterogeneous... is that how you say the word? I don't even know... more multi-vendor silicon. Sort of saying, okay, fine, we admit XPUs are gonna be part of the solution, but you should talk using NVLink and fit nicely into us. And then I was trying to think more about who would want to do that, and what is the benefit versus just having your, oh, we already have a data center full of Trainium, for example, and a data center full of NVL72s or GB300s or whatever. And I say Trainium because Amazon had made an announcement. Well, first of all, AWS Trainium, they work with Marvell as a back-end partner. So there's already a relationship between AWS and Marvell.
And then on top of it, I found, from December 2nd of 2025, and I barely remember this because so much has happened since then, there was an announcement around NVLink Fusion that Amazon, I think Trainium 4, was going to be open... I'm not sure, I should go back to exactly how they phrased it, but basically that they're going to use NVLink, but they also talked about using UALink too. So tying this all together, when I thought about this investment, the more I thought about it, I was like, oh, this kind of feels like the customer, AWS Trainium, already said they're gonna use NVLink. And Marvell needs to come along and make sure that their XPUs can work with NVLink. And this is maybe NVIDIA bringing it all together and saying, yes, you are gonna make this happen for your big customer, Trainium. That's Project Rainier, or however you pronounce it; it's like a million XPUs already. So clearly AWS is deploying things at scale. So I think if you're NVIDIA, you're thinking, hey, we can continue to sell AWS GPUs, and can we play in that expanding XPU TAM as the interconnect, and maybe as the supporting CPU racks, supporting KV cache storage clusters, right? So maybe it's NVIDIA seeing an expanding pie and saying we want to play in AWS's pie, and this is how we're helping do that: NVLink Fusion, investment into Marvell, tighter friendship there.

SPEAKER_01

So what you're saying is this is more on a need basis, because AWS is like, look, I want NVLink and I want you to do it, Marvell. And Marvell's like, but I can't do NVLink. So Marvell goes to NVIDIA and is like, hey, I'm gonna make a lot of chips for this, and they want NVLink. Can we work together? And NVIDIA is like, yeah, sure, why not? You're gonna give the chips to AWS anyway, but we can own the rest of the platform and lock them into that. So it's a win-win all around, right?

SPEAKER_00

That is my hypothesis exactly. It's a win-win-win all around. And then to take it maybe one step further, to what you were saying about photonics and Celestial: if I recall, and I need to go double-check all this, one of Celestial's big potential customers is Amazon Trainium. I think that was part of the whole Marvell acquiring Celestial, because Celestial had lined up such a customer. And my hunch is that the customer said, we want to bet on you, but we don't bet on startups. So you should get acquired, and you should be acquired by Marvell, and they're a partner of ours, and then we will trust that everything will come together. So I wonder if you're not wrong in that, like, hey, this is NVLink, and it's working closer with Marvell into Trainium. And by the way, Trainium's excited about photonic fabrics in the future. And so, yeah, maybe this is a way for NVIDIA to play there in the future too, with NVLink over optical die-to-die connections.

SPEAKER_01

Yeah. If you look at the online reaction to this piece of news, it's always like, oh my God, Jensen is a galaxy brain. Look at him, he's not really giving up anything. In fact, he is secretly locking in all the customers into the interconnect fabric and drawing them in, because there are only so many GPUs or ASICs you can put into a rack. But imagine when we go to over a thousand coherent GPUs in a rack; think about how much networking we'll need. So he's betting on the substrate instead of the compute, instead of competing with ASICs. Like, yeah, I kind of get that, but I like your story better because it's more of a need-based thing. It's more logical or realistic to me. I don't know if it's actually real, but it sounds nice.

SPEAKER_00

Right. Well, thank you. It's my speculation, and I do feel like the customer pull of Amazon saying, let's make this happen, is an interesting angle.

SPEAKER_01

We'll see if it's true. Can I add one more benefit I think there is to this whole situation? Yeah, please. So Marvell now has NVLink capability, which is great for them, but they also have UALink capability, which means anybody who wants to make these ASICs with Marvell has both choices now. They're not locked in. And Marvell wins either way, because, what ecosystem do you want? AMD, do you want to work with UALink? You want a custom ASIC? Sure, we'll make it for you. Or Google, do you want something? Or, like you say, AWS wants NVLink? No problem, we got you. So they can do everything now, which is amazing. Sorry? Go ahead. No, I was gonna say, as opposed to Broadcom, who also makes ASICs, but they're tied into UALink now. They don't have NVLink.

SPEAKER_00

Yes, yes, and Broadcom's definitely pushing ESUN. So I do think this is strategically sound from Marvell, to say we want to be an open partner so that you don't have to get locked into Broadcom switches. You don't have to do ESUN if you don't want. You can do UALink, but of course UALink, AMD's pushing it, and I think Astera Labs is out there. Oh, and Marvell, by the way, had an announcement a while back that they are going to support UALink, and they announced a custom UALink scale-up switch back in June of 2025. By the way, they bought that XConn company kind of quietly recently, for $540 million. And they have a Structera 260-lane PCIe 6.0 switch. But I think maybe they're gonna be able to support NVLink or UALink with this silicon. So Marvell will say, like you said, you can do NVLink, you can do UALink, and we've got the silicon to do the switching no matter what protocol you want.

SPEAKER_01

Yeah. So that opens up the door for a lot of people to use Marvell for whatever they want. And I like the other thing that you said, which is you can deploy these ASICs as a separate rack right next to GPUs. One of the fears that I read online was, isn't this going to eat into NVIDIA's GPUs, like they're gonna sell less, right? Why would you enable your competitor to put in ASICs? Now you can't sell GPUs. I don't think that's ever going to be the case. You always need GPUs, and if you see the Blackwell Ultra MLPerf results that recently came out, they're fantastic. Those GPUs are fantastic even now; performance-wise, they're amazing. So it's not like they're ever gonna go away, because you still need to train models, you still need to do stuff, and inference is opening up all these different workloads that we didn't imagine in the early days of AI, when we were more focused on training models rather than doing inference at the scale we're starting to now. And with agentic AI, it's even more: the token explosion is already here, or is gonna get even bigger, right? So in all of these situations, I think NVIDIA wins, because GPUs are still going to be used. And now, if you hook it all up with NVLink, you can do this extreme co-design approach that you and I keep talking about, even with ASICs, because why not?

SPEAKER_00

Totally, right? Why not have those workloads running really close to the orchestration CPUs and the GPUs? Put it all right there. Yeah, co-design it. Totally agree. Very relatedly, on this topic of running NVLink and UALink and ESUN, there's a startup out there, Upscale AI, and they're building a switch, Skyhammer. I've talked to them at some of the shows, I think OCP and recently. Anyway, their pitch is like, hey, we're building silicon that can run any of these protocols on it. As a company, you don't have to make an NVLink switch and a UALink switch and put in all the costs and design the masks and everything. You could just have our switch. And there's a little bit of a trade-off, which is there's a little overhead because we're software-defining what protocol you're running, and maybe it takes a little bit bigger chip because of that overhead, but it's totally worth it. And therefore, if you buy our switch, it can run UALink, it can presumably run NVLink, and it can run ESUN. So on the one hand, I thought, oh, this is exciting for Upscale AI, because it's sort of validating: Marvell's saying we want to serve customers who want any of these different protocols. Of course, there are questions around route to market for Upscale AI. Is Marvell gonna buy you? Is Broadcom gonna buy you? Are you gonna be able to convince customers to buy your switch instead of just partnering with these guys? And then on top of it, bringing it all the way back to NVLink: I was Googling Upscale a little more to refresh on the details, and I saw an article saying NVIDIA had poached one of their founding members, who was like a chief architect, to join NVIDIA's NVLink Fusion team. Just like two months ago.

SPEAKER_01

The conspiracy grows.

SPEAKER_00

Right, exactly. So make of that what you will, but I do think there's a lot of interesting things happening in this space, and it'll be fascinating to see in a year or two what scale-up protocols people are running and what switch silicon is supporting it. So tell me this.

SPEAKER_01

What happens if UALink and ESUN don't become the standard, and NVLink does, because it hooks up to everything anyway? I remember some time back Qualcomm also partnered over NVLink so they can provide their CPUs to hook up with NVIDIA GPUs via NVLink. It's like a CPU supply thing they signed up for some time back. So if NVLink becomes the dominant factor because they've opened it up via the Fusion platform, then what happens to UALink and ESUN, and what happens to Upscale AI, because now we need only one platform, one protocol, right?

SPEAKER_00

So here's my question to the proposition of, could it only ever be NVLink? Immediately I think, well, AMD has no incentive to support NVLink, because they don't want to. NVIDIA is their biggest competitor. They would hate to be selling their own racks and then giving a slice to NVIDIA. And that's why they're supporting UALink, so they can have an open alternative. But what if customer pull is totally all about NVLink? Because customers are saying, yes, we're going with multi-vendor silicon, but we want to reduce complexity as much as possible. So we want the scale-up to be just one standard, so we don't have to mess around with it or worry about it. But then you always get into a place where, whenever there's one winner, they can extract a premium, and everyone gets tired of paying the NVIDIA tax, and then someone pops up. So I have a hard time seeing it ever settling on NVLink. But you make a fair point, which is, if all these XPU people are starting to use NVLink so they can talk nicely with NVIDIA, NVIDIA is sort of disaggregating their walled garden and letting other vendors come in, but it's still sort of NVIDIA's garden, if you will, and they're kind of calling the shots and in control. And that's gonna make it hard for AMD to say, put racks of Instinct into your big multi-vendor deployment, you know?

SPEAKER_01

Yeah. But if history has taught us anything, especially when it comes to interconnect specifications, there has never been just one solution. Why do we still have so many USBs, right? In the history of interconnects, there has never been one platform. You remember that xkcd comic? Like, oh, we have a dozen competing standard specifications; we need one universal one. And now there are 13 different standard specifications. Something like that. Yeah, there's another one like that. No, yeah, I remember that.

SPEAKER_00

I remember that for sure. Exactly.

SPEAKER_01

So that's what we do. We're always going to have more standards than we know what to do with.

SPEAKER_00

Totally. And at the end of the day, it's never one size fits all. As engineers, you're always like, okay, there is something that's created that works for a lot of use cases, but there's always that one use case where it's like, oh, if we strip out all this stuff, we could do this use case way faster. So it does stand to reason that there will be more than one, and there are many incentives for there to be more than one. How about that?

SPEAKER_01

True.

SPEAKER_00

All right. Anything else to say on this Marvell topic?

SPEAKER_01

No. Let's see what happens. I'd like to see some actual products or some cool stuff happen with all these announcements. Otherwise we're just speculating and pontificating and making up conspiracy theories.

SPEAKER_00

Yeah, exactly. Right. Come on, keep it interesting for us, totally. Otherwise people will just be like, dude, those guys dream up the craziest things, and all this was about, whatever, two billion dollars or something. Yeah. I know. All right. Let's talk memory next. So you had an article this week about memory. Set the stage for us.

SPEAKER_01

So memory, memory has gone crazy. That's all. That's all I can say. Next topic.

SPEAKER_00

Next.

SPEAKER_01

Wait, it feels like you've been saying this for the last several weeks. No, it's gone crazier. I'll tell you why. So DRAM contract prices in Q1 of this year were like 95% higher, in just a single quarter. NAND has been up too, to a lesser extent, but still very high, some 60% or something. And now there's another TrendForce projection that says contract prices in Q2 of 2026 are going to be like 58 to 60% more compared to Q1. It is compounding at a rate that nobody has seen since the 2017-2018 cycle. And it's getting so expensive that smartphone makers and PC makers are like, I can't pay for RAM at this price. I just can't. How am I supposed to make a product and sell it to a consumer who is already cash-strapped, in an economy that has an incredible number of layoffs and wars and, you name your difficulties out there nowadays? How can I make low- to mid-tier devices when RAM prices are going to eat up, I don't know, 20 to 30% of my bill of materials? It's ridiculous. So what's happening is that a lot of companies are thinking, okay, the only way we can absorb these costs is to only make premium-tier devices. Let's not make low-end and mid-tier phones or PCs. Anybody who wants to buy a sub-$100 phone or a sub-$500 laptop, forget about it. You're not gonna have any RAM. So that's one option people are thinking about: no low-end stuff, we'll just do the high-end stuff. The other thing that usually happens in situations like this, and has happened in the past, is companies will start de-speccing low-end devices. If a mid-tier device had 8 GB of RAM, the new version of that is only going to have 4 GB of RAM, right?
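To put numbers on the compounding Vik describes, here's a quick sketch. The 95% and roughly 60% figures are the quarter-over-quarter contract-price jumps quoted above; the starting index of 100 is arbitrary.

```python
# Quarter-over-quarter DRAM contract price jumps quoted in the episode
# (TrendForce-style figures; illustrative only, not real price data).
q1_jump = 0.95   # ~95% jump in Q1
q2_jump = 0.60   # ~58-60% projected jump for Q2 2026

price = 100.0             # index the starting contract price at 100
price *= (1 + q1_jump)    # after Q1: 195
price *= (1 + q2_jump)    # after Q2: 312

print(f"price index after two quarters: {price:.0f}")
print(f"cumulative increase: {price - 100:.0f}%")
```

Two quarters of jumps like that roughly triple the contract price, which is why the bill-of-materials math for low-end devices stops working.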
So what happens in this whole process is that demand from the consumer side starts to go down, because they don't want to pay. And once demand goes down while supply keeps coming, it leads to overcapacity, because nobody's buying DRAM, and the prices crash. This is what is called the memory hog cycle. Hog as in pigs. When pork prices go up, all the farmers start raising pigs. But then, when the pigs are ready to be made into pork, they all hit the market at the same time, there's an oversupply of pork, and prices drop. Then everybody stops raising pigs, now there's a shortage of pigs, and it starts all over again. This has happened in memory so many times. In my Substack article, I give three examples of it happening. This is why memory is insane right now, and big companies are even cutting down mobile chip shipments because nobody can afford memory. Even the Raspberry Pi has raised prices three times since December, and it's only been three months. Okay? You can't even buy a Raspberry Pi, just saying. Now, the whole question is, is this time any different? What usually happens when customers and the consumer market pull back like this is that stuff crashes. This has always been the peak, and seasoned memory industry people recognize this signal very clearly. They see consumer demand dropping, and they're like, okay, this is where things are going to turn around, and it usually crashes after that. So the question everybody's asking is, is this time any different? Because the demand from AI is the fundamental reason we don't even have DRAM to begin with: all the DRAM is going to be made into HBM or server-class DDR memory. There's nothing left for consumers to use, right?
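The hog cycle Vik describes is the classic cobweb model from economics: producers commit capacity based on today's price, the supply arrives a period later, and the market overshoots in both directions. A minimal sketch, with made-up linear demand and supply curves rather than real memory-market parameters:

```python
# Minimal cobweb model of the memory "hog cycle": suppliers commit
# capacity based on the current price, so supply lags demand by one
# period and the market oscillates. All parameters are illustrative.

def simulate(periods=8, price0=120.0):
    prices = [price0]
    for _ in range(periods):
        p = prices[-1]
        supply = 0.8 * p            # high price -> everyone adds capacity
        price_next = 200.0 - supply # demand curve: more supply -> lower price
        prices.append(price_next)
    return prices

for t, p in enumerate(simulate()):
    print(f"period {t}: price {p:.1f}")
```

Each period's price lands on the opposite side of the equilibrium from the one before, which is exactly the boom-and-bust alternation the speakers describe; with these particular slopes the swings dampen over time.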
So the real question is, is this time any different? Because the AI demand for memory is ridiculous. So what if consumer demand drops? The memory makers will just sell it all to AI companies. And think about it: Micron discontinued, what is it, the Crucial line? Yeah. And all these big players, SK Hynix, Samsung, they're all converting their DRAM product lines into HBM because it's higher margin. Actually, I don't say margin anymore; it's a questionable thing to say. It's a higher-ASP kind of memory. It's harder to make, but it's higher ASP. And the reason I say it's not higher margin per se is because the pricing for standard DDR memory, or even server-grade non-HBM DDR, has gotten so high that companies like Micron are making more out of non-HBM memory than they are with HBM memory.

SPEAKER_00

Right, which is wild. Zooming out, memory makers have only so many wafer starts per month. And then they have to decide what percentage of these wafers, let's talk DRAM, are going to be for HBM and what percent are gonna be for standard DRAM. And there's the trade-off that it takes something like 3x the wafer capacity per bit for HBM as it does for standard DRAM. And I think for HBM4 that's increasing, maybe even to like 4x. So you're having to make a decision: if I make more and more HBM, it's gonna eat up a bunch of wafers that could have gone to DRAM. But early on it was like, hey, that's fine, HBM has really high prices and therefore pretty good margins. Now, when you're talking about consumers, when I think, okay, wow, there are gonna be fewer wafers needed to go into phones for DRAM, I'm like, well, that's fine, because that wafer can immediately slot in elsewhere. There are two different customer classes here: data centers and consumers. And data centers are just gonna eat up every wafer that used to be a consumer wafer. That's not a problem. So, look, if we don't sell as many phones, I think that's fine; that wafer just becomes a data center wafer. But the interesting thing that you're pointing out is, okay, fine, but does it become a DRAM wafer or an HBM wafer? And actually, there's some pull to make it DRAM, because there's not enough DRAM, and therefore margins are really good on DRAM, and it probably has better yields and is less complex and stuff. So there's an interesting dimension there: if you take that wafer from consumer, what should you allocate it to? Should it be HBM or DRAM? But as far as whether this time is different, my hunch is, even if consumers don't need as many wafers...
I think those wafers, for now, can just get slotted right into data center customers and go for servers, for all these CPUs that are needed for agentic AI, and obviously for all the accelerators that keep getting created. But what do you think? Do you think the weakness from consumer won't get picked up by all the data center customers, and there will be oversupply?
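The allocation trade-off Austin describes can be sketched with back-of-the-envelope math, using the roughly 3x wafers-per-bit figure from the conversation; the monthly wafer count and the bit normalization are made up for illustration.

```python
# Back-of-the-envelope wafer allocation, using the ~3x figure from the
# conversation: a bit of HBM consumes roughly 3x the wafer capacity of
# a bit of standard DRAM. Wafer counts are hypothetical.

WAFERS_PER_MONTH = 100_000    # hypothetical total DRAM wafer starts
BITS_PER_DRAM_WAFER = 1.0     # normalize: 1 unit of bits per standard DRAM wafer
HBM_WAFER_PENALTY = 3.0       # ~3x wafers per bit for HBM (maybe ~4x for HBM4)

def bit_output(hbm_share):
    """Split the wafer budget and return (hbm_bits, dram_bits)."""
    hbm_wafers = WAFERS_PER_MONTH * hbm_share
    dram_wafers = WAFERS_PER_MONTH - hbm_wafers
    hbm_bits = hbm_wafers * BITS_PER_DRAM_WAFER / HBM_WAFER_PENALTY
    dram_bits = dram_wafers * BITS_PER_DRAM_WAFER
    return hbm_bits, dram_bits

for share in (0.0, 0.2, 0.4):
    hbm, dram = bit_output(share)
    print(f"HBM share {share:.0%}: HBM bits {hbm:,.0f}, "
          f"DRAM bits {dram:,.0f}, total {hbm + dram:,.0f}")
```

Every wafer shifted to HBM gives up roughly two-thirds of the bits it would have produced as standard DRAM, which is one mechanism behind the overall bit-supply squeeze even before any demand shock.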

SPEAKER_01

You see, the consumer market is a giant whale of a market for DRAM. Think about how many phones and PCs there are. That is the real question; what you're asking is the good question. When the demand drops, does AI suck up the overflow? Is it a big enough market to absorb the overflow? In the past, if you look at the cloud server cycles, it didn't absorb it well enough. Then there was the crypto boom, where everybody was buying crypto mining equipment and DDR prices shot up too. Then when crypto crashed, or people decided it was too expensive to mine, a similar oversupply situation happened. In the past, it has never been absorbed. Here, it's a little bit different. I think this time it can absorb it for some time. Of course, many companies are still building out capacity. So when that actually comes online, I don't know when, yes, you will have the oversupply issue, I'm guessing. Because now what's happening is that companies are careful, okay? These executives have been burned many times. They are professionals, they know what they're doing. It's not like guys like you and me pontificating. They really know what they're doing. So they are actually controlling how much DRAM supply is being pushed into the market. Because when you convert a conventional DRAM line into HBM, it is a one-way, capex-intensive conversion, right? Before, you had a fungible line: you could make LPDDR, which goes into phones, or GDDR, which goes into GPUs, or standard DDR, which goes into your PCs, and they're all relatively the same. It could all run off the same wafer supply. Here it's not like that, because the conversion is completely different. The HBM product lines are very, very different, because they have through-silicon vias.
So the processing steps for HBM DRAM are different. And that's why you need three to four times the wafers: the through-silicon vias have a keep-out space around them, so you can't just stuff transistors around the vias. The density of DRAM drops when you build DRAM for HBM. And then you need to stack so many DRAM dies to make HBM. All of this means it just sucks up and vacuums up the supply, right? The conversion process is also expensive, because now you have to test HBM memory and make sure the yield is right. With all of these problems, it is a commitment that a memory company makes consciously. They're just not gonna do it willy-nilly. Which is why the contract structure for memory has changed. Now long-term agreements run, you know, three to five years ahead. And companies don't want to miss out on a long-term agreement. In the cloud era, it used to be a quarterly or a yearly deal, so the moment people backed out, that was the end of the cycle. Now, no, it's a multi-year deal, and people are probably oversubscribing to memory. I don't know if it's going to be an oversubscription; I don't want to say, oh look, we have so much demand for HBM and this is it. There could be a scenario where, yes, we are repeating the same mistake we have made in the past, and all these cloud and data center builders are oversubscribing to DRAM yet again, and history is only going to rhyme again. But the long-term agreements are three to five years, and companies don't want to give up their spot in the line, because that means HBM allocation will go to your competitor. You go to the back of the line, and nobody wants to be there. So why not overbuy, just because you can, right?

SPEAKER_00

So long-term agreements, so you're committing to buy this certain amount at this price for the next three to five years. Are you locking in price or can the price move?

SPEAKER_01

So that is a give and take, right? There are price-floor clauses, meaning the memory makers say: look, this is the minimum price we will lock in. So there is structure to that too. Because if you think about it, given how demand is rising, it is actually in the memory makers' interest to sign shorter-term deals: if you sign quarterly, you can go renegotiate a higher price next quarter. So short deals would actually benefit them. But from what I read, it seems the longer-term agreements are in place so that supply and capacity planning can be better. There's better visibility when you sign multi-year agreements.
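As a toy sketch of how a floor clause might behave: the dollar figures and the simple max() structure here are illustrative assumptions, since real long-term-agreement clauses are negotiated deal by deal and are more involved than this.

```python
def contract_price(spot_price, floor_price):
    """Per-unit price under a hypothetical LTA with a price-floor clause.

    If the spot market crashes below the negotiated floor, the memory
    maker still collects the floor; above it, the deal tracks the market.
    """
    return max(spot_price, floor_price)

# Spot crashes to a hypothetical $3/GB against a $5/GB floor: seller is protected.
print(contract_price(3.0, 5.0))  # prints 5.0
# Spot rallies to $9/GB: buyer pays market (no ceiling clause assumed here).
print(contract_price(9.0, 5.0))  # prints 9.0
```

The floor is what makes the multi-year commitment tolerable for the memory maker: it trades away some upside from quarterly renegotiation in exchange for downside protection and the capacity-planning visibility discussed above.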

SPEAKER_00

Yes, yes, that's exactly what I was thinking. You're committing to a volume, and therefore it feels like even if demand collapses on the consumer side, and maybe the data center doesn't come in and suck it up, from a supply perspective there should be very good visibility for the next five years. Obviously they're bringing more capacity online, but I wonder if that prevents a sudden crash, the "oh, suddenly we have way too much supply and not enough demand" moment. Instead it's: we have really good visibility, so we can commit our capacity accordingly. Of course, that's more like a supply floor, and we don't know the demand side. CapEx keeps going up at these huge hyperscalers, and we don't know their future projections. I'm sure the memory makers talk with them, but it's still hard to predict the future. You can make a scenario where, fine, there's consumer weakness in mobile. Is there going to be consumer weakness in PC? I don't know. And then what about this new form factor, the agent computer that's going to start sitting on some people's desks to run OpenClaw or other agents all the time? Very speculative, but that could be a new form factor that soaks up some of that consumer weakness. It's still kind of a consumer form factor, though I picture it in the enterprise, you know.

SPEAKER_01

But PCs are also looking down. All the estimates I saw call for something like 12 to 13 percent lower shipments, which is actually a significant amount. So PCs are affected as well. And interestingly, mobile phones use LPDDR, low-power DDR, and if you look at the Vera CPUs, they use LPDDR too. Think about that: NVIDIA wants to sell so many CPUs now, and all of them use the same low-power DDR that mobile phones use. So where is the supply going to come from?

SPEAKER_00

Totally. There's a new competitor, a new buyer in town for LPDDR.

SPEAKER_01

Well, and I saw this interesting piece of news on X, and afterward on a website too. I am not sure if this is true, don't quote me on it, it might be a total rumor, but I'm going to say it because it has at least entertainment value. The rumor is that Apple is overpaying for DRAM right now, vacuuming up the supply so that competitors can't get to it. Even though it's expensive to buy, Apple already has a premium-tier product they can use it in, sell at a high price, and lock everybody else out of DRAM supply.

SPEAKER_00

Fascinating. Who knows if that's true? But it's a very interesting chess move and it makes a lot of sense that they would be incentivized to do that.

SPEAKER_01

Yeah.

SPEAKER_00

Essentially, Apple could help that along. We're already hearing that the low-end smartphone market is basically going to be frozen, maybe dwindle, for the next couple of years. And if they wanted to, as the premium flagship competitor, they could keep it frozen by buying up all the DRAM and being willing to pay a premium to starve their competitors.

SPEAKER_01

Who knows? It's a hyper aggressive move, but yeah, this business is like that.

SPEAKER_00

Right? Okay, so let's move on from memory. We have just a few minutes left, so the last topic: Intel buys back its Ireland fab from Apollo. The news is that Intel is paying $14.2 billion for the 49% stake that Apollo bought back in 2024, funded with cash plus about six and a half billion dollars of new debt. What was your reaction when you saw this news?

SPEAKER_01

I mean, this isn't just my view: everybody believes this is great for Intel, that Intel's fab strategy and 18A and everything are looking good, and that this is great news. So everybody's hyper bullish on this.

SPEAKER_00

It definitely does feel like a confident signal from Intel. Two years ago it was: we're bleeding cash, it's not clear how the future is going to work out, we have to get very creative about financing these build-outs, we're even willing to sell just under half of a crown-jewel fab to private equity. And anytime you're dealing with private equity, those are more ruthless partners, let's say. There has to be something in it for them; the terms have to be very favorable for them, right? So it felt a little bit like a back-against-the-wall move. Now fast forward two years, and this does seem to signal a lot of confidence: we feel good about the signals from our foundry business and our customers, our stock price has appreciated, now is the right time to buy back this fab. And this is the Ireland fab, right? It runs Intel 4 and Intel 3. There is huge demand for server CPUs, and a lot of that had actually been on Intel 10 and 7, but it would make sense that even more Intel 4 and 3 capacity will continue to be needed. So conceptually, making sure Intel owns that fab outright and has full control over it, which they were already directing anyway, feels like a strong signal that they expect continued demand for Intel 4 and 3. But my question for you is: does this signal that they feel very confident about 18A and beyond? Or is it unrelated? A lot of people right away said, oh, this must mean Intel feels very confident in 18A. On the one hand, you can say this is an Intel 4 and 3 fab and they're just buying it back and getting rid of some expensive financing. But on the other hand, to me it does signal they're confident in the foundry business, which means they're confident in 18A and potentially Intel 14A. What's your reaction?

SPEAKER_01

Yeah, I'm not going to go that far. I'm going to err on the side of caution here and say I don't know when I really don't know, because I don't want to extrapolate a line to "18A yields are good and that's why they did this." But I was thinking about why Intel would do this now. Look at it this way: two years ago they sold the stake for, what is it, $11.2 billion, to this private equity firm called Apollo. About a year before that, they had opened this fab, Fab 34 in Ireland, an $18.4 billion investment, which is also, by the way, Intel's only EUV fab in Europe. So it's strategically important too. Then, under Gelsinger's smart capital strategy, they sold 49% for $11.2 billion, and now they're buying it back two years later for $14.2 billion. Like you say, the private equity firm has to get something out of this, and what they got is $3 billion in profit in two years. I mean, that's a pretty good return for the time.

SPEAKER_00

Yeah, right. Someone's going out for dinner and fancy steak and champagne to celebrate that. Right, right.

SPEAKER_01

So in Intel's downtimes in 2024, the smart capital strategy was a good way to raise money: sell 49% of the stake in this fab to Apollo, with the option to buy it back. But then the question is, why now? If they didn't do it now, Apollo would keep owning 49% of it, right? So for every Core Ultra and Xeon chip that comes out of Fab 34, and I think these are at least the server-grade CPUs on Intel 3 and Intel 4, Apollo would take 49% of the profit. So Intel must believe it's better not to keep giving up 49% of the profits, and to pay the roughly $3 billion extra today, spend the $14.2 billion, and keep 100% of the benefits of all the output of this fab going forward. Otherwise, why would they spend the money?
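You can frame that buyback decision as simple breakeven arithmetic on the numbers mentioned above (the profit-sharing structure here is a simplifying assumption; the actual Apollo deal terms are more complex than a straight 49% profit split):

```python
# Figures from the discussion: sold for ~$11.2B in 2024, bought back
# for $14.2B two years later; Apollo held a 49% stake.
sale_price = 11.2      # $B, 2024 sale of the 49% stake
buyback_price = 14.2   # $B, 2026 repurchase
apollo_share = 0.49    # fraction of fab economics Apollo would keep

premium = buyback_price - sale_price  # ~$3B extra paid to reclaim the stake

# Cumulative fab profit at which reclaiming Apollo's share pays for
# the premium: premium / apollo_share.
breakeven_profit = premium / apollo_share
print(round(breakeven_profit, 1))  # prints 6.1
```

Under this simplified model, once Fab 34 generates a bit over $6 billion in cumulative profit, Intel comes out ahead versus leaving Apollo with its 49%, which is exactly the "pay $3 billion now to keep 100% of the output" logic.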

SPEAKER_00

Yeah, that's a really good framing. We'll pay that $3 billion now because we want those profits, which means they think they're going to sell a lot of these chips, and that the 49% upside across tens of millions of chips is worth more to them than $3 billion today.

SPEAKER_01

Also, you don't want to be paying yet another extra $3 billion if you delay this by three years.

SPEAKER_00

Correct, because the value will surely increase. You know what it reminds me of? When you play Monopoly and you can mortgage your properties. In the good times you're out buying all these properties, and then times get bad and you go, ah crap, I need to mortgage this thing. So you flip the card over and mortgage it, but then you can't make any money when someone lands on it. And now it's like, no, no, we want to flip this card back over. We're going to spend now to unmortgage it, because we think a lot of people are going to land on it in the next couple of years.

SPEAKER_01

Yeah, like somebody you're playing with, your kids or something, has landed twice on this mortgaged property, and you go, oh my God, I've got to unmortgage this now, because he's going to land on it three more times and I'll get back the money I spent unmortgaging it, even if I had to pay 10% more. I think that's a great analogy. I love it.

SPEAKER_00

Yeah, totally. It's like, oh man, meta and open AI and Google and whoever keep landing on this, and we need to flip it back over.

SPEAKER_01

But yeah, I hope they land on it, though. If they don't land on it, it's like, oh, now I'm short of capital again.

SPEAKER_00

Yeah, right. Which always happens to me in Monopoly. My luck runs out.

SPEAKER_01

I've lost Monopoly for the last 12 years. Every game. I think I won once, and it was like news: all my kids and my wife, we were all celebrating that I actually won something. How am I so bad at this game? It's a good thing I'm not in charge of Intel.

SPEAKER_00

There you go. It's a strategy game, but it's also luck. Or maybe your strategy is no good.

SPEAKER_01

Yeah, it's user error, man. It's not luck.

SPEAKER_00

There you go. Well, hey, that's it for today. Thanks, everyone, for listening. We hope you enjoyed this. If you're enjoying Semi Doped, we'd love it if you gave us a five-star rating and a quick review: comments on the YouTube videos, reviews on Apple Podcasts, Spotify, or wherever you listen. And share it with your friends. Word of mouth is awesome; we hear from people that friends shared the show with them, and that's a really nice signal that you're enjoying what we're doing. So thank you so much. Check out our newsletters, and we'll see you next time.