Semi Doped

Meta's Inference Accelerator & Applied Optoelectronics (AAOI)

Vikram Sekar and Austin Lyons Season 1 Episode 13


Austin recaps moderating an agentic AI panel at Synopsys Converge, then gives an in-depth technical breakdown of Meta's MTIA custom silicon. Why they're building it, how chiplets let them ship a new chip every 6 months, and how the roadmap is shifting toward gen AI inference. Vik digs into Applied Optoelectronics (AAOI), the vertically integrated Texas laser shop whose stock went from $1.48 to $100+, and whether history is about to rhyme.                     

Austin Lyons: https://www.chipstrat.com
Vik Sekar: https://www.viksnewsletter.com/
                                                                                                                                  
Topics covered:
• Agentic AI in chip design — how it changes roles for junior and senior engineers
• Optical circuit switching and what it means for Arista's business model
• Meta's ad-serving pipeline: Andromeda, Lattice, and the GEM foundation model
• Why custom silicon (MTIA) makes sense at Meta's scale
• MTIA chiplet strategy — 4 generations in 2 years
• AAOI's vertical integration, Amazon's $4B warrant deal, and the 2017 parallel

Chapters:
0:00 Intro
1:26 Synopsys Converge — Agentic AI Panel
9:44 Vik's Article: Optical Circuit Switching & Arista
14:43 Meta MTIA — A New Chip Every 6 Months
21:32 Why Custom Silicon Makes Sense for Meta
27:22 MTIA Chiplet Strategy & Roadmap
33:56 Gen AI Fits Meta's Business Model
36:31 How Meta Ships Chips So Fast
40:30 Applied Optoelectronics (AAOI) Deep Dive
45:02 Amazon's $4B Warrant Deal
48:54 Can AAOI's Lasers Compete with Lumentum?
53:16 AAOI's Aggressive Capacity Buildout
55:35 History Rhymes: AAOI's 2017 Boom & Bust
1:00:55 Wrap-Up

#semiconductors #chips #tech #meta #MTIA #AAOI #optics #inference #AI

SPEAKER_02

You know the stock price that it hit at the bottom after this great crash out that happened from the 40 gig to 100 gig transition? It hit $1.48. It was over $100. It lost almost 99% of its value.

SPEAKER_00

Hello everyone. Welcome to another Semi Doped podcast. I'm Austin Lyons of Chipstrat, and with me is Vik Sekar from Vik's Newsletter. Hey Vik. So I just got back from Synopsys' Converge event, where I hosted a panel during the executive forum. And I thought it'd be fun to talk about that for just a tiny bit. And then, you know, we could talk a little bit about what you wrote this week. But then, you know, I think we've got some interesting inference accelerator thoughts. And maybe at the end we'll have time to get back into optics and talk Applied Optoelectronics, AAOI.

SPEAKER_02

Yeah, that's the plan for today. We should talk about some inference stuff and definitely Applied Optoelectronics, because many people have written to us asking, hey, why did you miss this from your optics supply chain list? This is such an interesting company. And yeah, I think we should have covered it, actually. But you know what, there are so many of them popping up all over now. So we can definitely do more of those kinds of episodes. So keep that feedback coming, because we want to hear what listeners want to hear us talk about; that way we can get good content out to listeners. But let's talk about your Synopsys panel first. This is amazing. So, how was it moderating a panel? Because once you've done one, you're gonna do 10 and you're gonna become a professional moderator. So tell me how it went.

SPEAKER_00

No, it was fun, it was good. You know, I had a script of questions to ask people. On the panel, we were talking about agentic AI in chip design, and then also just agentic AI broadly, which is a perfect topic, easy to talk about because it's so hot right now and it's moving so quickly. I had Kari Briski from NVIDIA, Richard Ho, who's the head of hardware at OpenAI, so that was really cool, Asim Datar from Microsoft, and Roger Tibet from Synopsys. So, you know, as we're talking about agentic AI, it was cool to have someone from NVIDIA, from OpenAI, from Microsoft, and from Synopsys; they're all trying to be at the forefront building agentic AI for EDA and chip design, and then just generally. So they all had really cool experiences. My job as a moderator: I had prepared questions in advance, but then they all said really interesting things, so I was just trying to follow up on the interesting things they were saying. And yeah, we had a super fun, interesting conversation. And next thing you know, my hour was up. I guess the funny part was I couldn't remember the guy I was supposed to introduce afterward. It's the Synopsys CRO. His name's Mike Ellow. I'll never forget it now. He actually used to be the Siemens EDA CEO before he went to Synopsys, which is pretty cool. But then I got done and I was like, all right, well, everyone, next up is someone from Synopsys, I can't remember his name. The next guy is up next. Exactly. That's exactly what I said. So, lesson learned on that one, to take a note there. But yeah, it was a ton of fun.

SPEAKER_02

Awesome.

SPEAKER_00

So what were the key takeaways?

SPEAKER_02

Like, do you have one or two things that stood out to you?

SPEAKER_00

Well, okay, so it was during an executive forum. So we wanted to hit topics like, not just what is agentic AI, but where are companies in actually deploying this? Are we all just talking about it, or are some companies actually doing agentic AI during chip design, for example? So, where are companies? How are roles changing? How do we get our company to adopt agentic AI? Because as an executive, you have to start to think, oh, is this gonna be a top-down thing, or should we expect it bottoms-up, where people are just like, I'm trying this and this is awesome, and then it can spread that way. And then also, okay, well, if roles are changing, as executives, how do we prepare for that? What does a junior engineer look like in two years if an agent can do some place and route and timing closure and the feedback loop and everything? What does that mean for junior engineers? But also, what does it mean for a senior engineer? Are they gonna feel like their job is going away, or whatever? So lots of really interesting topics. And I think maybe one thing that stood out to me was this idea: let's talk about senior engineers, architects. What they've developed over twenty years isn't necessarily just, I'm really good at coding, or I'm really good at this technical skill. It's obviously judgment and taste, being able to zoom out and look at a system at the architectural level and be like, oh, I know why you want to do that, but here's why we shouldn't do it. There be dragons. I've done this before and I got burned, right? I have life experience to share here. And so, okay, cool, agentic AI, even if it can automate some of the individual contributor tasks, it hasn't necessarily built up that judgment today. So then the question is, how can we lean on people for their judgment, which you can argue is actually more fulfilling: not just, okay, I thought about the architecture, we had some great debates, but now I'm just gonna go type on the keyboard, you know. It's empowering people. And yet at the same time, the question is, well, what about a junior engineer, where they have no judgment yet? But then the cool part is, I think you can actually accelerate the time to learning and the time to value for junior engineers. Because when I graduated, I remember getting my first job and being kind of disappointed, because in grad school you build all the things in your classes and you have full ownership, and then you go slot into some thousands-of-persons org, and you're on a really big team, and you get a small little block with just a little bit of responsibility, and it's like, oh, hang tight and do this. And after many years, when other people get promoted, and once a year there's promotions and stuff, maybe you will get to become an architect and develop judgment. And in the meantime, you just watch everyone else's judgment.
And so then it's essentially like, oh, what if you could actually put more responsibility on junior engineers? Because you don't have to give them the task of just, write this RTL and test it and verify it. You could lift them up higher. And again, this isn't the idea of, oh, we're just gonna replace people; it's actually, we're gonna do more. Now our junior engineers could act like senior engineers. Of course, then you have to talk about mentorship, and how do you make sure that they're making the right decisions? But I think there's definitely a positive and optimistic framing, both for people who've been in the industry for 25 years and for people who've been in the industry for 25 months.

SPEAKER_02

Yeah, so this is very interesting actually, because I've spoken to other people in the industry; some of them are startups building tools like this. And I've also spoken to junior engineers who, in this market, actually do find it difficult to get a job, because I think companies are indexing towards, what could AI do for me? Maybe I don't need to hire so much headcount. So maybe there's some hedging on that front. But essentially, with this AI substrate that is now taking over, the judgment calls that the senior engineers make could actually be codified in some way, and could flow through to the junior engineer, who uses the same set of guiding instructions that the underlying AI learned from the senior engineer, and so it elevates the junior engineer to a much higher level. Yes, 100%. So that is a very interesting idea, and yeah, I think there's a lot of potential for things like this to happen, and it's very interesting how things will go in the future.

SPEAKER_00

Yeah, totally, totally agree. I could talk about this a lot. We'll have to have another episode about it, but yes.

SPEAKER_02

Yes, I think we should. I think it's an important topic because it affects the way the industry works. It's not always about chips and, you know, which company is gonna make the CPO thing happen first or whatever. That's one side of it. But the actual people who are in the trenches, like you say, doing that one tiny task, those are the unsung heroes of the semiconductor world. And we've both been in those positions, doing this stuff for a long time, so we understand how much work actually goes on behind the scenes. So we have to talk about what AI means for that role at some point.

SPEAKER_00

100%, 100%. Yes, let's definitely talk about it. We'll keep moving on, but there's a lot there. And I think we're undergoing such a transformative shift in technology, but also in the job role in engineering. We have a front row seat, so yeah, we should be out at the forefront thinking about this and talking about it, because it can literally impact all these people at all these companies who are doing work that could be more meaningful if they adopt the tools. But then it could go so far as to impact education; maybe college education needs to change. Because there's a conversation about, well, you still need to teach the fundamentals, but how do you also give them these tools alongside it? So I think there's so much to think about here.

SPEAKER_02

Yeah, totally. So good topics for the future.

SPEAKER_00

Totally, totally. Okay, so quick: how was your week? I saw you wrote about optical circuit switching and Arista, maybe the impacts it could have on Arista.

SPEAKER_02

Yeah, this was a question that somebody asked me on the Substack chat, actually, and it's a great way for me to get ideas and connections that people are thinking of that I'm not necessarily thinking of. And kind of my job as a Substack writer is to go and investigate that connection in as much depth as I can and write about it. So the basic premise starts with Arista's strength. And you wrote about Arista; really, that's a good article. I've linked your article in my article, because you have a good description of what Arista does as a business. So somebody who's not familiar should actually read your post before going to mine, because I will be contrasting it with what OCS brings to the table. So, anyway, the point is that Arista basically doesn't make any silicon. They buy silicon from Broadcom, the Tomahawk switches. They put it in this gigantic box with all the transceivers and stuff. Their magic lies in putting a layer of software on it, which they call EOS, which I talk about in the article. And it has a very specific function, because it handles all the packet-based switching that happens when you're dealing with electrical switching, right? Where does this packet go? How do we deal with congestion problems? Those kinds of decisions are being made by software. Arista is really good at that stuff. That is their golden goose, basically. So, like you described in your article, this is what makes Arista a blue box solution provider, and people like to pay for that software sophistication and the underlying hardware as well, as opposed to the white box approach, which does not have the software sophistication but has the hardware. And you can write your own software, which some companies do, but not all. If you want a turnkey solution, go to Arista, get a blue box solution for your networking. You can put this on the rack at the spine level. So there's the leaf, and then the higher level above that is the spine. The spine is where you have so many connections coming in; it's a high port count switch, very complex, a lot of traffic. That is where Arista specializes. Thing is, when you go to optics and you go to co-packaged optics, and, I don't know, whenever that happens, 2028, 2029, 2030, the entire rack has gone optical. But the switch is still electrical. The Tomahawk is an electrical switch. It has CPO, but you're gonna convert the optics to electrical, and then you're gonna make decisions on packet routing. But then, why would you have electrical packet switching when you can do everything in light? That's what optical circuit switching is. You don't make any electrical packet-based decisions; you just reflect light and it does the switching. It's like, why not go with that? So, in the article, I argue why optical circuit switches cannot immediately replace Arista's sophistication yet. It's not as simple as just putting in an optical circuit switch and calling it a day. So that's what it's all about. If anybody's interested in this topic, go read that one.
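
[Editor's note: a minimal Python sketch of the contrast Vik is drawing, under our own assumptions; all names and numbers are hypothetical, not Arista's or any vendor's actual implementation.]

```python
# Electrical packet switching: a decision is made for every packet. This is
# where EOS-style software sophistication (congestion handling, ECMP) lives.
def forward_packet(packet: dict, routing_table: dict, queue_depth: dict) -> int:
    candidates = routing_table[packet["dst"]]             # several valid next hops
    return min(candidates, key=lambda p: queue_depth[p])  # congestion-aware choice

# Optical circuit switching: no per-packet decisions at all. The switch is a
# reconfigurable mirror map from input port to output port; light passes
# straight through, so routing/congestion logic has to live somewhere else.
class OpticalCircuitSwitch:
    def __init__(self) -> None:
        self.cross_connect: dict[int, int] = {}

    def reconfigure(self, mapping: dict[int, int]) -> None:
        # Reconfiguring mirrors takes milliseconds, vs nanosecond packet decisions
        self.cross_connect = dict(mapping)

    def path(self, in_port: int) -> int:
        return self.cross_connect[in_port]  # fixed until the next reconfigure

routing_table = {"10.0.0.7": [1, 2, 3]}
queue_depth = {1: 40, 2: 5, 3: 22}
print(forward_packet({"dst": "10.0.0.7"}, routing_table, queue_depth))  # -> 2

ocs = OpticalCircuitSwitch()
ocs.reconfigure({0: 5, 1: 4})
print(ocs.path(0))  # -> 5, with no packet inspection anywhere
```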

SPEAKER_00

Yeah, nice. Very interesting. And it ties into your 4D chess Hock Tan comment on the last episode about CPO and about OCS. Because at the end of the day, like you said, what's inside of Arista's spine switches? It's Tomahawk silicon, right?

SPEAKER_02

Yes, yes. And who better to play up copper, so that the domain stays in copper and you can do electrical switching and not optical switching, than Hock. So that was my 4D chess hypothesis.

SPEAKER_00

Yeah, yeah, yeah. And I like how you pull on Arista. Because if you think about Arista, okay, if your business model is, we take someone else's silicon and we put it in a box, that's an ODM, really, and that has low margins. But then if you say, on top of that, we're also adding this software special sauce, well, software is high margins. And so your margins end up becoming pretty decent still, because you've got essentially a low margin business plus a high margin business. That's the blue box solution, by the way. If you're white box, where you can't capture as much value because you're not putting sophisticated software on it, then you are just an ODM. And now, if you're talking about, oh, if there's no electrical switching, what does it mean for the software and these kinds of things, you're kind of pulling apart that business model and going, wait a minute, does this feel like a white box? Is the software not as valuable? So when I saw it, I was like, oh, genius, this is a good thread to pull on. It's kind of like the next layer down, the second order impact of OCS.
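
[Editor's note: back-of-envelope arithmetic for the blended-margin point above; the revenue split and margins are invented for illustration, not Arista's actual financials.]

```python
# Hypothetical revenue split and gross margins (not Arista's real numbers):
hw_revenue, hw_margin = 70.0, 0.15  # boxes built on merchant silicon (ODM-like)
sw_revenue, sw_margin = 30.0, 0.85  # EOS-style software attach

blended = (hw_revenue * hw_margin + sw_revenue * sw_margin) / (hw_revenue + sw_revenue)
print(f"blended gross margin: {blended:.0%}")  # -> 36%

# Strip out the software (the "white box" case) and you're back to ~15% ODM
# margins, which is the second-order risk if OCS removes the packet software.
```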

SPEAKER_02

So yeah, it was good; I had fun researching that. But let's talk about what you wrote. Very important thing that you wrote, which we will talk about more. So this is a good way to talk about your Substack article, and then we'll go into some more depth. Meta wants to make a chip every six months. Tell me more about what you wrote.

SPEAKER_00

Yes, okay, Meta and their MTIA chip. Interestingly, up until maybe the last six months, I didn't even know too much about MTIA, other than, yeah, I'd heard about it once or twice before, but no one ever talked about it. It didn't seem super relevant. And so I started to learn about, well, why does Meta use AI hardware and where are they going with it? Obviously a lot of talk recently is about generative AI, and how Meta's superintelligence team and Zuck are hiring all these people and paying tons of money and acquiring companies and everything. But zooming out and stepping back, I thought it'd be worth talking about Meta's core business here for a little bit. Because at the end of the day, Meta's business model is advertising. That's how they make their money, right? Of course, we think about them as social media and Instagram and their family of apps, but at the end of the day, it's an advertising business. And the key to their business, ever since, you know, 2011 or something, is that they introduced the feed, and then eventually an algorithmic feed. Which, by the way, I remember getting on Facebook in the very early days when there wasn't even a feed; you had to go to a person and write on their wall. Do you remember that?

SPEAKER_02

Yeah, yeah. I'm old enough to have signed up for Facebook with my .edu account. Okay, yes, exactly.

SPEAKER_00

Yes, me too. Yeah, lots to talk about over a beer sometime about those days and first getting on Facebook, back before our parents were on it.

SPEAKER_01

Yeah.

SPEAKER_00

Yeah. But anyway, so they introduced the feed, and then quickly an algorithmic feed, because it's like, oh, how do you decide? I don't want to see everything that everyone posts, I want to see the most important things. And along with that, they're also like, hey, we have to have a business here, let's put ads in here. You get into ranking and recommendations. So, what should show up in your feed, what should show up at the top, and then naturally, what are the right ads to show people? And all of this is driven by machine learning. Before it was AI, it was ML, right? So if we fast forward just a little bit, nowadays Meta has kind of three systems that they like to talk about, and it's kind of three workloads. The first one, if we talk about it from the advertising perspective, is a system called Andromeda, and it does retrieval. The question is, we have a huge library of ad content. Austin logs in, we could show him anything. We could show him purses, we could show him cameras, we could show him bicycles. What should we show him? So there's this first-stage retrieval pass, and Andromeda does this, and it says, of all the ads we could possibly show Austin, here are a few thousand candidates, depending on what platform he's on and what he's doing and who he is and where he lives and stuff like that. And this originally was kind of co-designed with NVIDIA on their Grace Hopper platform. But recently Meta said, in one of their earnings calls, I think the most recent one, that this particular workload, the retrieval, now runs across NVIDIA, AMD, and their MTIA chip. So we'll come back to that, put a pin in it, but it's interesting to see it start on one hardware platform and already get moved across multi-vendor platforms. The next workload is something they call Lattice, a system they've built. And I think of this as essentially the ranking. It takes the short list of, hey, here's a thousand things we could show Austin, which, by the way, all of this has to run crazy fast, right? It's kind of mind-blowing how fast it has to run. But Lattice takes the short list from Andromeda, and then it decides which ads to actually show you. So it's like, okay, great, you filtered this down to a thousand that make sense, but what's the best one right now that's gonna have the highest impact? And interestingly, they actually have this unified model they trained. It used to be a bunch of different models trained on feed, stories, reels, Messenger, all their different apps, but they got to one unified model trained on all these signals about what people are doing across all their apps, which is pretty cool, right? Because I could start on Instagram and see an ad and leave and come back, and I could go to Facebook and be doing something, maybe searching Facebook Marketplace to see if I could just buy that thing used locally, right? And so by going across the whole family of apps, they can get even more insight. But ultimately they want an optimization function that looks like, well, what's gonna get Austin to click, or what's gonna get Austin to convert here?
I couldn't find what platform they actually run this on, but I'm assuming it's GPUs, and we'll talk about it in a second. But then finally, they've got this GEM foundation model that they've been talking a lot about lately, if you listen to the earnings calls, and this is their recommendation model. And interestingly, this is the first one to show scaling laws like LLMs, which used to not be true. LLMs are exciting and sort of unique in that if you throw more data and more compute at them, you can get more intelligence out, essentially. That was the whole story from 2022 to 2024. Recommendation systems were not like this. It wasn't guaranteed that if you threw more compute at them, you would get better recommendations. But Meta has a ton of researchers, and they actually figured out kind of a new model where, if you do throw more compute at it, you can get better rankings. But then the interesting point, and I'll bring this all home, is that it's this huge, expensive model. It's very big, runs on lots of hardware, trained on GPUs. There's no way they could quickly serve 3.5 billion daily active users on it. It would just be too expensive. So they actually have a teacher-student setup: the big GEM is the teacher, and they distill it down into all of these tiny models that can run inference blazing fast. So they've got this pipeline that just runs all the time and runs really quickly. What should we show Austin? What is he gonna click on? And this is where their MTIA chips came in in the first place. They said, we should design custom silicon for running these specific workloads, these specific recommendation and ranking inference workloads, which have a particular shape. You do want HBM, because in this case it's different than LLMs, where you need all the weights and you work through things sequentially; here you have this huge embedding table. When someone, Austin, walks up and he's from Iowa, we kind of look up characteristics that are related to him in this embedding table. And if you logged in with different characteristics, it's got to look across these tables differently. So you have this different memory access pattern, where the chip still needs lots of HBM, but it's not necessarily a bandwidth-bound workload. It's memory-access bound, but also memory-capacity bound. So the MTIA family, as I looked into it: if you look at the MTIA 300, which they have hundreds of thousands of deployed in production, their blog said, it's an 800-watt accelerator, optimized for this particular shape of workload. It has very high scale-out bandwidth, 200 gigabytes per second, because they kind of need to shard the model across all these chips, and you're doing all these lookups. So you actually need to be able to communicate in a scale-out fashion very quickly. But it's not like, I need insane compute FLOPS, because, if you think about something like the decode stage of LLMs, it's actually not as compute heavy. The neural networks for this aren't as big as LLMs.
So I know that's a lot of context, but I share all this to say that when you look at the type of workloads they run for their core business, where they make money, and the shape of the MTIA 100, 200 and now 300, it actually fits the profile of the workload very well. So then the question is, what are they doing with the 400, 450, 500? How are the specs of the chip changing, and how does that align with the workloads changing? So I'm gonna pause there and let you interject with your high-level thoughts.
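
[Editor's note: a toy Python sketch of the two-stage retrieval-then-ranking pipeline Austin describes. It is only meant to show the shape of the workload: a cheap wide scan plus embedding lookups, then a heavier model over roughly a thousand candidates. Every name, size, and model here is a hypothetical stand-in, not Meta's implementation.]

```python
import numpy as np

NUM_ADS, EMB_DIM = 100_000, 64
rng = np.random.default_rng(0)
ad_embeddings = rng.standard_normal((NUM_ADS, EMB_DIM), dtype=np.float32)  # big table in HBM
user_table = {"austin": rng.standard_normal(EMB_DIM, dtype=np.float32)}    # sparse per-user lookup

def retrieve(user_id: str, k: int = 1000) -> np.ndarray:
    """Stage 1 (Andromeda-like): scan the whole corpus cheaply, keep top-k candidates."""
    u = user_table[user_id]                 # embedding lookup: a random memory access
    scores = ad_embeddings @ u              # one cheap dot product per ad
    return np.argpartition(-scores, k)[:k]  # indices of the k highest scores

def rank(user_id: str, candidates: np.ndarray) -> int:
    """Stage 2 (Lattice-like): a 'heavier' model over only ~1000 candidates."""
    u = user_table[user_id]
    scores = np.tanh(ad_embeddings[candidates] @ u)  # stand-in for a distilled student model
    return int(candidates[np.argmax(scores)])

best_ad = rank("austin", retrieve("austin"))
print(f"show ad #{best_ad}")
```

The access pattern is the point: scattered lookups into huge tables favor memory capacity and fast scale-out communication over raw FLOPS, which matches the MTIA 300 profile described above.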

SPEAKER_02

Yeah, yeah. That's a good description, but I'm gonna walk through it a little bit more; not forward, I'm gonna walk back a bit. So, what we're talking about is a ranking and recommendation system where the first stage, the retrieval system, is Andromeda, and it gives a list of candidates that goes through Lattice, which is a ranking system, and it ranks based on Andromeda's candidates. So that's what this is, and all of this runs on basically not super huge models, because they have to serve 3.5 billion people. So these are somewhat distilled models of the big thing that they can run really fast. And they need really fast inferencing, because the use case is ad serving and the recommendation system. That's their major money maker here. So the use case requires low latency by design and definition, and that's why this is a particular niche market: they're not running large research problems, they're running this particular thing that is small and fast. And so they need a particular kind of hardware solution for that.

SPEAKER_00

Yes, right, yes, exactly. And this is why custom silicon makes sense. Because the benefit of GPUs is they're flexible: they can support high precision, low precision, they can support different-shaped workloads, and you can reuse GPUs in the future for different things, right? So you may be like, I'm gonna have this big training cluster now, but maybe in five years I'll just reuse all these chips and serve some inference. But this is a company who says, no, we understand our workload very, very well. We're running it at insane scale. We want it to be as fast as possible, but also as cheap as possible. So we want to make trade-offs. We don't need FP64, for example, or we actually need more bandwidth, or we don't need all of those FLOPS, right? And even think about the CUDA lock-in, or the CUDA moat that everyone talks about. Dude, when you're Meta, you're like, we have workloads that we know really, really well. We can write custom software if we want, because we know that workload. It's not like, oh, but there's all these libraries, so in case I want to do computer vision, I can download a library for that. No, no, no. We can get into that topic later when you talk about multi-vendor setups, but Meta is at such scale that it actually makes sense to make your own hardware, and, obviously there's PyTorch and the whole stack, but even write custom software if you need, because it's your workload that you know so well.

SPEAKER_02

Yeah, right-sized workloads, right? The right-sized hardware for the right workload, I guess; you've written about this in an earlier post, I remember. So you mentioned a bunch of numbers here, and I just want to clarify for somebody who hasn't seen the news before. MTIA 100 and 200 are what are already out and about, for a few years now, I think, or at least a year, I'm not sure of the timeline. What they announced recently was the 300, the 400, the 450, and the 500. And according to them, the MTIA 300 is already used in ranking and recommendation. It has one compute chip and two network chips and a bunch of HBM. And one interesting thing is, yes, this is a low latency application, but all four announced chips are HBM based. Nothing involves SRAM and all of that kind of stuff. Interesting that they didn't mention it.

SPEAKER_00

Yes, I think it's super interesting too. They didn't disclose anything about the SRAM. So I interpreted that as SRAM not being a key part of the strategy; they're going for throughput here. Throughput is the main constraint they're optimizing for, not necessarily latency. But yeah, what do you think?

SPEAKER_02

Yeah, maybe. I'm not sure why Meta wouldn't benefit from an ultra-fast, low-latency thing like Groq. I don't think Meta would fail to benefit from going and acquiring somebody who's done an SRAM-based chip and just rolling that in, because it addresses a certain section of the inference market that might be useful for them. I don't know. I think it's interesting. I think Meta may go after one of these startups. I don't know.

SPEAKER_00

Yeah, Meta, if you're listening, come talk to us about this. You don't have to tell us what startup you're gonna acquire, but let's talk HBM and SRAM. Because to your point, for example, MatX recently, they've been talking about, well, our strategy is HBM and SRAM. Why not weights in SRAM and KV cache in HBM, so that you can be both fast and high throughput? And here we're saying, hey, there's this custom chip that Meta made, and they didn't disclose anything about the SRAM. So what are they doing? It must not be fast, right? Because I thought SRAM means fast. Well, maybe actually, in designing their custom chip, they cut out enough of the general-purpose stuff they didn't need, and they've optimized all their software, so maybe they still get good enough latency using whatever SRAM they have. But again, Meta, come talk to us and enlighten us.
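
[Editor's note: rough arithmetic behind the "weights in SRAM, KV cache in HBM" idea; all model sizes and cache parameters below are hypothetical round numbers, just to show the orders of magnitude involved.]

```python
def gib(n_bytes: float) -> float:
    """Bytes to GiB."""
    return n_bytes / 2**30

# Weights at 1 byte/param (int8/fp8), hypothetical model sizes:
small_model_params = 1e9    # a distilled ranking-style student model
big_model_params = 400e9    # a frontier-ish LLM
print(f"small model weights: {gib(small_model_params):6.1f} GiB")  # ~0.9 GiB
print(f"big model weights:   {gib(big_model_params):6.1f} GiB")    # ~372.5 GiB

# KV cache grows with layers x heads x context x batch, so it wants capacity (HBM):
layers, kv_heads, head_dim = 80, 8, 128    # hypothetical
ctx, batch, bytes_per = 32_768, 32, 2      # fp16 entries
kv_bytes = 2 * layers * kv_heads * head_dim * ctx * batch * bytes_per  # K and V
print(f"KV cache:            {gib(kv_bytes):6.1f} GiB")               # ~320 GiB

# On-chip SRAM tops out at hundreds of MiB, so only small distilled weights could
# ever live there; the big weights and the KV cache both want HBM capacity.
```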

SPEAKER_02

Yeah, yeah. It's not like we know everything that's going on in your amazing engineering teams; we can only guess. But yeah, it's interesting, because this is two years of chips that they plan to release. And we will talk about how they can release a chip every six months, because it takes like three months to even fab a chip; it's kind of ridiculous. At least I think the cycle time is at least two, three months to run through a fab and packaging or whatever. But they want to release four chips in two years; that's crazy, right? And the highest-end chip here, I can see from the notes on their releases, has four compute chips in a two-by-two grid array, and a lot of HBM, going up to, I think, like 400 or 500 GB of HBM on the MTIA 500. This is not only for ranking and recommendation; they're also optimizing for gen AI. So generative AI is as important a play in all these chips as ranking and recommendations.

SPEAKER_00

Yes, yes. And I've written about this a little bit too. Everyone, from an investment perspective, has over-indexed on the generative AI piece for Meta. And you'll hear stuff like, oh, well, Meta's gen AI model, Llama, it's not as good as OpenAI's or Anthropic's, or, hey, I think the recent news was, I heard they might even use Gemini. Like, what the heck? Why did they spend all this money on these people? They're losing.

unknown

Yeah.

SPEAKER_02

Wait, wait, one second. I want to mention one extra, very important thing here. We're gonna get to how they make these so quickly, but there's an even more important question of who is making them. Who is making all these chips? This is all custom ASIC business that goes to Broadcom, isn't it? Yes, it is, exactly. So nothing to Marvell?

SPEAKER_00

No chips to Marvell? Well, I don't know. That's a good question. That's always the question; I'd have to go research. With any of these custom ASIC things, you have to go do a bunch of research, because there's all sorts of noise out in the supply chain, like, well, what about Marvell? What about MediaTek? What about all these other sort of back-of-house companies? Right.

SPEAKER_02

I think my understanding was this is all gonna be like Broadcom custom ASIC business.

SPEAKER_00

But yes, yeah, anyways.

SPEAKER_02

So how is Meta gonna do this? Yeah.

SPEAKER_00

Yeah, to that point, The Information came out saying, oh, Meta's scrapping their training chip. And these chips, the way they frame them, are inference first. Now, they said that they do some ranking and recommendation training on the 300, and that future chips could be used for training, but the design decisions seem to be made for inference first. So Hock Tan, when that article came out, Broadcom's earnings was a few days later, and he had to get on the earnings call and say, Meta's a great customer, they're doing fine, everything's fine, don't freak out here. And then this article comes out, and it's a victory lap for Hock, because it's like, oh, nice, four generations of chips over the next two years; that's a lot of money for Broadcom. Like, yes, yes.

SPEAKER_02

So the custom ASIC business is, as he said in the earnings call, alive and well. Although people are trying to go, what was the term again? Customer-owned tooling.

SPEAKER_00

Oh, yes, COT, customer-owned tooling, whatever. Yeah, yeah.

SPEAKER_02

I guess this is not COT. This is still Broadcom.

SPEAKER_00

No, correct, correct. Okay, so back to the generative AI thing. If you look at Meta's business, their business today is ranking and recommendations. Now, at the end of the day, if you're an advertising-based business, you want engagement, you want people on your platform, and then you want to put the best ads in front of them. Generative AI can unlock both of those. From an engagement perspective, one of the use cases they talked about in the earnings call was that using LLMs, they can do dubbing into different languages. So, wow, very cool. Now they could take you and me talking in English and dub it into Mandarin or whatever. That opens up more engagement and more opportunities for advertising. But then also, of course, you can use generative AI to make better creative content. Like, oh hey, I don't want to spend my time making videos or making graphics; use generative AI, right? So generative AI fits nicely into their business model as is. It's not just that Mark Zuckerberg hopes to have a chatbot that's better than OpenAI's chatbot. And that is why you can see their chips shifting, the 400, 450, 500, to be more generative AI beasts: crazy HBM capacity, lots of FLOPS, lots of low-precision FLOPS. But they actually kind of cut back on the scale-out network, because this isn't for training LLMs, where you need huge scale out. It's for inference. And so they've increased the scale-up domain size from 16 nodes to 72 nodes, and then they decreased the scale-out network because it's just not a priority. So again, you can see they're making inference-specific, LLM-inference-specific decisions.

SPEAKER_02

So what does that mean? Scale out is not as important anymore?

SPEAKER_00

Yeah, if you're talking inference, you don't need a particular workload to scale out to 10,000 chips. You can run an inference workload on 72 chips, or maybe 144. Now, of course, there's still some partitioning of the workload. You've got all those different parallelisms, like expert parallelism. If you've got a mixture of experts, you might say each GPU has its own little expert, and then you still have to have enough communication, which is all scale-up communication these days, to say, okay, this came in, which expert should I route it to? But yeah, for inference, scale out is not important; scale up is important, so long as you have a big enough domain to fit the big model and all the experts. Yeah.
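
[Editor's note: a toy Python sketch of the expert-parallel routing described above, with hypothetical sizes; real MoE serving uses learned gates and batched all-to-all exchanges, but the traffic pattern is the point.]

```python
import numpy as np

DOMAIN_SIZE = 72   # scale-up domain: assume one expert per accelerator
HIDDEN = 512

rng = np.random.default_rng(0)
gate_weights = rng.standard_normal((HIDDEN, DOMAIN_SIZE), dtype=np.float32)

def route(token_activation: np.ndarray, top_k: int = 2) -> list[int]:
    """Decide which accelerators this token's expert compute is sent to."""
    gate_logits = token_activation @ gate_weights  # tiny matmul, runs for every token
    return list(np.argsort(-gate_logits)[:top_k])  # all-to-all traffic, but it stays
                                                   # inside the scale-up fabric

token = rng.standard_normal(HIDDEN, dtype=np.float32)
print(f"send token to experts on chips {route(token)}")

# This chatter is frequent, small, and latency-sensitive, so it wants a big, fast
# scale-up domain (16 -> 72 nodes); scale-out bandwidth mostly matters for training
# across thousands of chips, which these inference parts deliberately trade away.
```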

SPEAKER_02

And the reason they can do all of this stuff so quickly is that all of it is chipletized, right?

SPEAKER_00

Yes. Oh yeah. So to the question of how they can do four chips in the span of two years, or every six months: it's all chiplets, and it's so beautiful. You can go look at the diagrams. Like you kind of said early on, it's, oh, I'll take two of these and one of those in this generation, and then next generation I'll take two of these and two of those, right? So it's just little Lego blocks. And at the end of the day, it's beautiful. You create this IP, and then you can say, okay, we need to get a chip out. So, team, go take a couple of these blocks and a couple of those blocks. But we know for our next generation that we want more compute, so go put more compute dies together.

SPEAKER_02

And as generative AI scales, they may make more hardware specific towards that. And maybe ranking and recommendation doesn't need that. So they have this flexibility to mix and match Lego blocks to match workloads, like the sketch below.
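
[Editor's note: a sketch of the chiplet "Lego block" idea as configuration data. The 1-compute/2-network mix for the MTIA 300 and the 2x2 compute grid for the 500 come from the episode; the HBM counts are invented placeholders, not disclosed specs.]

```python
from dataclasses import dataclass

@dataclass
class ChipSKU:
    name: str
    compute_dies: int  # reused compute chiplet
    network_dies: int  # reused network / die-to-die I/O chiplet
    hbm_stacks: int    # placeholder counts, not disclosed specs

    def describe(self) -> str:
        return (f"{self.name}: {self.compute_dies}x compute + "
                f"{self.network_dies}x network + {self.hbm_stacks}x HBM")

# Same building blocks, different mixes per workload:
roadmap = [
    ChipSKU("MTIA 300 (ranking/reco)", compute_dies=1, network_dies=2, hbm_stacks=4),
    ChipSKU("MTIA 500 (gen AI)", compute_dies=4, network_dies=2, hbm_stacks=8),
]
for sku in roadmap:
    print(sku.describe())
```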

SPEAKER_00

Totally, totally. And so maybe a question for you, having been in the chip industry as an electrical engineer more recently than me. I looked at this and I thought to myself, oh, I wonder how they divided this up amongst their teams. Is it, okay, here's the team working on Gen 1, here's the team working on Gen 2? Even though it's an arrangement of the same kind of components, there's so much work to be done that you can't just have one person be like, I'm working on four generations of chips at once. But on the other hand, if you're using the same core components, you should be designing at an architectural level, at least, such that each component can be reused across all the generations. So how do you think they're handling the teams internally?

SPEAKER_02

So there will be underlying basic IP that spans across all products. If you're going to do this many chips in two years, there is simply no way you can re-engineer everything. So a lot of these chips are going to share a significant amount of silicon IP right off the bat. And the only changes would be small architectural ones, like maybe a few of them have extra die-to-die interfaces. Because I'm looking at this MTIA 500, and because it has a two-by-two grid of chips, you're gonna have die-to-die interfaces at multiple points, like at least three points on the chip, right? As opposed to putting two chips together, where you need only a die-to-die link in one spot on the chip, that kind of stuff. You're gonna have these blocks that are dropped in in different places to make slightly different variations of the main chip. But the underlying thing is going to be very, very similar. And then the work really comes in integrating these things into different product lines, testing them, making sure they all work. Because just because the underlying pieces are similar-ish doesn't mean that you can skip out on the testing and the validation and mass production of these chips. So all that work gets doubled up. Design shares a lot of underlying things, but a lot of these other things get doubled up. They have something called qualification by similarity, where they'll say, okay, the change was very minimal, so I think it's gonna be okay; we're just gonna run these incremental tests on this product to make sure that things work out. But yeah, there's a lot of work to be done, and that's why Meta has so many people working on this stuff.

SPEAKER_00

Yes, and they acquired Rivos, I think was the name of the company, which apparently, according to the research I did, was involved in helping design the original MTIA 1 and 2. So I think it was a natural fit, a natural acquisition. I found this great Next Platform article, and it said they had a hundred engineers on day one, and then they hired like another 50 over time from Apple, because some of the original founders had been at Apple. And then there was a lawsuit, but Lip-Bu Tan was an investor at Walden International, or Walden Capital, whatever it's called. And so he came in and helped figure that part of things out. And I was just like, man, every story I read, about a startup or otherwise, Lip-Bu Tan is part of the storyline. He's always there, he's all over the place. That's why he's a legend. Exactly. He is a legend. Okay, so we've been talking about this a long time. We're almost to the end of the episode. We have to talk optics, because everyone wants to talk optics these days. OFC is happening next week, and GTC, so there'll be lots of optics news. But let's shift gears: AAOI, Applied Optoelectronics. We'll try to keep it fairly short. So, sorry, listeners, we're not gonna go crazy deep. But Vik, who is this company? What do they do? Why are they interesting? Why do readers and listeners want to hear about this company?

SPEAKER_02

Yeah, so we'll keep it short, but we'll keep it as info-packed as possible, because there's so much going on around Applied Optoelectronics that makes it very interesting. For those who are not aware of what this company does, this is basically a small Texas laser shop, right? That's seen stock growth of 700%; I think it's gone from like 10 bucks to over 100 bucks in like six months or something. It's crazy. And one of their claims to fame is that they manufacture everything, vertically integrated, from the actual indium phosphide substrate and deposition of all the III-V materials to make the lasers, all the way up to manufacturing the laser light sources and making modulators and integrating them into packages and making connectors. So they apparently have everything, from start to finish.

SPEAKER_00

Let me ask you a quick question there. Yeah. So you said substrate. Do they buy the substrate and then do the epitaxy on top and keep going?

SPEAKER_02

No, I think they actually own everything, including the substrates. You think they grow their own substrates? I think they grow their own substrates. I said the same thing about, what was it, Coherent? Was it Lumentum or Coherent? I think it was Coherent. And somebody got back in touch with me, and this is why I like our audience, a very keen bunch of people. They're like, did you read anywhere that this is actually true, that they actually make the wafers from ingots or whatever? I'm like, let me look again. And so I went and looked, and Coherent, I don't think so. They buy wafers too, right? So we were talking about the supply chain problems with buying the substrates from China in a previous episode, and we were like, oh no, Coherent doesn't have this problem. Actually, they do, right? That's what somebody set me right on, and thank you for that.

SPEAKER_00

Yeah, because it was like, they grow their own silicon carbide ingots and dice their own wafers from that, but not indium phosphide. They buy indium phosphide, something like that.

SPEAKER_02

Yes. Anyway, regarding this one, I think they grow their own ingots; they manufacture everything end-to-end. So that is their claim to fame, right? But really, this company was famous for making cable TV stuff, actually. And funnily enough, in all this noise about AI and everything, their cable TV business has exploded. It went something like 3x year over year, something obnoxious like that. I'm like, who cares about cable TV? It turns out there's something called DOCSIS 4.0 that has come out, and it requires a whole bunch of technology changes, and this company is perfectly well positioned for that. And the cable TV business is still like half of the company. Yeah, really.

SPEAKER_00

It's like the cash cow, yeah. Now tell me really, really quick. When I think cable, I think cord cutting, like cable dying. So did their business ever take a hit from cord cutting or something? Or is that just more of a macro thing that didn't impact them?

SPEAKER_02

I don't know. I don't know how cord cutting affected them. That's why I'm always like, what's cable TV? What's all this stuff? I don't know. Somebody should let us know. I have to go read about the cable TV business, because I don't know it very well myself. I mean, in the era of AI, why would I go research a cable TV business? I'm not very motivated. So I didn't do it.

SPEAKER_01

Yeah, yeah.

SPEAKER_02

Right. So that's their whole thing. So now, what happened is they got into the optics business, and they can make data center and communication products, and it's fantastic. Everybody is really interested in this company because they can make their own lasers too. So technically, and we'll argue this a little bit later, but technically they're not beholden to buying lasers from another company, in theory. In theory, because they can make their own. So if laser supply is an industry bottleneck, which we know it is, this self-sufficiency is a big advantage, right? So this is one of the bull cases.

SPEAKER_00

Self-sufficiency and made in America from start to finish.

SPEAKER_02

Very important in the global political situation we are in right now. If it's made in the US, a big problem is solved: you don't have to worry about tariffs and all kinds of stuff. Exactly. So that is a big attraction for this thing too. And they do everything: they do electroabsorption modulated lasers, you know, those EML chips we talked about when we spoke about Lumentum. They do PICs, they do CW lasers for CPO, and they have all these transceivers; they build everything, right? So what's not to love about this company? Which is why its stock has gone up. And what is really interesting is that Amazon signed like a $4 billion, 10-year purchase agreement with warrants with this company, which is a big deal, right? Why would somebody commit to buying your stuff for 10 years? That's a major positive sign. So when this happened, the stock jumped up almost 50%, like 45% or something.

SPEAKER_00

Yeah, big stamp of approval. Which, by the way, we should literally do an episode on Amazon and warrants in startups. It's very interesting. All sorts of AI startups, but even Rivian; Amazon used warrants with them for electric vehicles, like transport vans. And I share this because AMD has granted warrants to people recently, and everyone was like, oh, that's warning bells. But I was like, no, no, this actually happens all the time. So, a conversation for another time, but it is a great stamp of approval for AAOI.

SPEAKER_02

It is, right? And then the question is, is optics really important for AI? And then NVIDIA shows up with four billion dollars for Lumentum and Coherent. And then what happens is, Lumentum and Coherent this month are going to get into the S&P 500. I'm like, all these signs are strong signals that optics is kind of important now. Imagine making it into the S&P 500 as a laser company or whatever. It's important. That's it.

SPEAKER_00

Totally, totally, totally. And there's actually a bunch of interesting implications there that we could talk about another time. Because once you're in the S&P 500, you become part of an index fund. If someone buys an index ETF, everyone's gonna invest in you, and it's just weighted. And so now you have to start to ask, are Lumentum's and Coherent's stock prices moving just because of this effect, rather than purely because investors who really understand the company are buying or selling?

SPEAKER_02

Yeah, exactly. So here's the thing about all of this, and there's an argument against it, in a sense. Let's not talk about CPO for a moment. Right now, the reason Lumentum is all the rage is because their 200 gig EML modulators are best in class, right? That is why we spoke about even Coherent, their competitor, buying EMLs from Lumentum. This is the theory, at least: that nobody's able to make these lasers as well as Lumentum. The same thing applies to these folks. Really, you may be vertically integrated, but are your EML chips really any good? Or are they lagging? Do you have to buy these from Lumentum too? And then all your vertical integration advantages are gone when you can't make a good enough EML. Right.

SPEAKER_01

Yeah. So can they make a good enough EML?

SPEAKER_02

No, I don't know. I don't think so, because otherwise everybody would be buying from Applied Optoelectronics too. Why would Lumentum have such a stronghold on 200 gig EMLs, which are the state-of-the-art EMLs? So that doesn't tell me that they're obviously amazing at doing that, right? So the play here is that, yeah, CPO needs to show up. And when CPO shows up, again, this is where Lumentum's moat erodes. In a sense, nobody really requires these EML lasers anymore, because the modulation is going to go on chip, into a silicon photonics chip, where they use either a Mach-Zehnder modulator or a ring modulator, like NVIDIA does. And then all you need is what we spoke about in the last episode: a flashlight. You just need a laser source. It doesn't have to be modulated, it doesn't have to turn on and off. You just have to make laser light that comes out at a substantially high power, typically 300 to 400 milliwatts. And that's then going to be modulated by the silicon photonics chip. Now, apart from the power aspect, yes, that's challenging to get; these are ultra-high-power lasers. But the real moat does not exist anymore. Coherent can make these too, right? Lumentum can make CW lasers, these guys can make CW lasers. So now what? So, coming back to Applied Optoelectronics, what's your moat now? What is the wafer size you're going to run on? They don't have six-inch, they have like maybe four-inch wafer lines, even for CW lasers. So where's the benefit here? If Coherent comes out with a six-inch, fully yielding CW laser, which is easier to make than a 200 gig EML laser, then why wouldn't we just go with Coherent, right? They're a bigger company; Applied Optoelectronics wants to be the size of Coherent, but they are not, right? For sure.
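
[Editor's note: a back-of-envelope link budget for the external CW "flashlight" laser Vik describes; every loss figure below is an assumed ballpark for illustration, not a measured or vendor-quoted number.]

```python
import math

def dbm(mw: float) -> float:
    """Milliwatts to dBm."""
    return 10 * math.log10(mw)

laser_out_mw = 350.0  # a 300-400 mW class CW source, per the episode
n_channels = 8        # assumed: one laser split across 8 lanes
losses_db = {
    "coupling into the PIC": 1.5,     # assumed
    "on-chip routing": 1.0,           # assumed
    "modulator insertion loss": 4.0,  # assumed (ring or Mach-Zehnder)
}

per_channel_dbm = dbm(laser_out_mw / n_channels) - sum(losses_db.values())
print(f"launch power per channel: {per_channel_dbm:.1f} dBm")  # ~9.9 dBm

# Once you subtract receiver sensitivity and link margin, you can see why the
# CW source has to be high power, and why yield on these lasers matters so much.
```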

SPEAKER_00

So yeah, what I hear you asking is, how does Applied Optoelectronics differentiate? Is it performance? Or is everyone's performance just good enough? Is it cost, like, oh, you're somehow cheaper? Well, to your point, they're not on six-inch wafers, so there's a big cost driver. Now, on the cost point, I will say a lot of people have talked about how Applied Optoelectronics has automated a ton of their manufacturing. And I know that the manufacturing and packaging and assembly of these chips is not easy. And I saw in one of their 10-Ks, I think I even wrote down the quote, let me see if I can find it... oh, here it is: "Our relatively more automated production process for certain optical modules also allows us more freedom in locating our manufacturing operations in customer favorite geographic locations while maintaining relatively low labor costs." And what they're saying is, we can build in America, but only at, say, a 10 to 15% premium to building in Asia. And we can only do that because we've automated everything. Which, by the way, is a podcast for another time: the future of physical AI, in my opinion, is that if you want to keep manufacturing things in America, or do even more manufacturing in America, the only way to do it in a labor-effective way is to totally leverage robotics, humanoids, automation. This is an example, but we'll move past that. So the question is, can they compete on cost? I don't know, maybe. Can they compete on performance? And if it's not those, is it just that there's such a supply-demand imbalance today that they can win for a while, because people want to dual source and everyone just needs lasers? So they're gonna win for a while. But then the question in the long run is, once that supply-demand imbalance shakes out, is it customer relationships that differentiate them? Are they good enough that customers will say, we like working with you, we want to stick with you, even though now there's enough supply that we could buy from anyone?

SPEAKER_02

Yeah, so they're building out capacity aggressively. Their revenue in 2025 was around 455 million, and their projected revenue in 2026 is over 1 billion. So they are actually building factories. This is a Texas company, based in Sugar Land. I lived in Texas for quite some time, so I know it's near Houston; a suburb of Houston, I think. Anyway, their current production capacity for 800 gig units is about 90,000 units, and by the end of 2026 they're going to go from 90,000 to 500,000 units of combined 800 gig and 1.6T, right? So they are expanding big time. And the automation might help ramp up capacity. Yeah, maybe. So this is a company that's building out really rapidly, and the stock price has gone up, but not everybody thinks it's going to go up forever. I was reading Citrini Research's recent optics post called Let There Be Light, and there he actually calls the top on this. He says Applied Optoelectronics has kind of hit its top; he's not for this stock anymore. And some people don't disagree with Citrini, which is fine. But it's very interesting, because I want to point out one historical note, and I think this is interesting: compared to now, this company had something very similar happen in 2017. In 2017, there was this whole data center upgrade cycle from 40 gig to 100 gig connectivity, right? And at that time, Applied Optoelectronics' stock went from like 12 dollars to 105 dollars. Something very similar to what's happening right now; it's literally the same price points, which is amazing to me. Crazy, and in a short span of time, right? And it's the same management team, the same people, and they had the same vertical integration narrative at the time: we're good, we can do this, we're vertically integrated. But they had the same problems too, including customer concentration. Back then they were like 75% concentrated in their top two customers, and today I think it's even more than that, like 80%. So their customer concentration is very high. These parallels are crazy. And their anchor customer in 2017 was actually Amazon again. Oh, really? Yeah. So even now it's Amazon, with the 4 billion dollar warrant deal. However, the warrant deal is like a guaranteed purchase arrangement, so that part is different from what happened earlier. But everything else is basically the same, right? What ended up happening was the stock eventually hit its peak of over a hundred dollars, and then one quarter there was soft demand, the stock dropped, the 100 gig ASPs collapsed, and then, because they owned the whole supply chain, their laser reliability became an issue and their lasers started failing. Their so-called benefit of being vertically integrated became a problem. It hurt them, because now they had to fix their own problems.
If you're buying a laser from Lumentum and it screws up, that affects all of Lumentum's customers in some way. But if the problem is uniquely yours, you have to fix it yourself, you know. So vertical integration can bite you back, actually.

SPEAKER_00

Yeah, this is so interesting. My takeaway is that for every transition, 400 gig, 800 gig, 1.6T, 3.2T, they need to participate in every single one and try to stack those S-curves. Because to your point, with Lumentum and Coherent and InnoLight and these guys, if we just keep getting more players, then someone's always first, but as the others ramp, the ASPs are going to go down. Of course there's a macro supply-demand picture to deal with, but over time your ASPs go down. So it's going to be a volume game, and you just need to get to every next generation as fast as possible.
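That stacking dynamic is easy to see in a toy model: each generation's volume ramps along an S-curve while its ASP erodes, so a supplier living off one generation watches revenue roll over unless the next ramp starts in time. Here's a minimal sketch in Python; every number in it (ramp rates, volumes, ASPs, decay) is a made-up illustration, not a forecast for AAOI or anyone else.

```python
# Toy "stacking S-curves" model: per-generation volume follows a logistic ramp
# while ASP decays as competitors qualify and supply catches up. Every number
# here is made up to illustrate the dynamic -- not a forecast for any company.

import math

def logistic(t: float, t_mid: float, rate: float = 1.5) -> float:
    """Adoption fraction (0..1) for a ramp centered at t_mid (years)."""
    return 1.0 / (1.0 + math.exp(-rate * (t - t_mid)))

def asp(t: float, t_launch: float, asp0: float, decay: float = 0.30) -> float:
    """ASP erodes ~30%/year after launch (illustrative)."""
    return asp0 * math.exp(-decay * max(0.0, t - t_launch))

def gen_revenue(t: float, t_launch: float, peak_units_m: float, asp0: float) -> float:
    """Revenue ($M/yr) for one generation: ramping units times eroding ASP."""
    return peak_units_m * logistic(t, t_launch + 1.0) * asp(t, t_launch, asp0)

# Hypothetical generations: (label, launch year, peak units M/yr, launch ASP $)
gens = [("800G", 0.0, 5.0, 900.0), ("1.6T", 2.0, 5.0, 1500.0)]

for year in range(7):
    first_only = gen_revenue(year, *gens[0][1:])
    stacked = sum(gen_revenue(year, *g[1:]) for g in gens)
    print(f"year {year}: 800G only = ${first_only:,.0f}M, stacked = ${stacked:,.0f}M")
```

With only the first generation, revenue peaks around year two or three and then decays as the assumed 30%/year ASP erosion outruns the flattening ramp; layering in the second generation keeps the total climbing, which is the whole argument for hitting every transition.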

SPEAKER_02

Yeah, yeah. Since we're coming up on an hour, I've saved the best for last. You know the stock price it hit at the bottom, after the great crash that followed that 40 gig to 100 gig transition? It hit $1.48. It was over a hundred dollars; it lost almost 99% of its value. And it hit that bottom, I think, sometime in 2022. So this whole story is not that long ago. For people who are looking into this or trying to put money into it, again, not investment advice, just think about the history of what has happened and how things are different now, because this is a little bit different. That was an upgrade cycle of networking speed. Here, the AI capex build-out is a structural theme across the entire industry, not something narrow to one upgrade cycle. Everything is building out at crazy speeds, everyone is investing money, and it's a multi-year build-out, right? We're still increasing capex. So I think it's different. And Amazon's warrants lasting 10 years helps too; it's not the same as what happened in 2017. So there are differences, but be aware of history and how it has played out in these kinds of things. I think it really helps. Yeah, history rhymes.

SPEAKER_00

Totally. Wow, what a wild ride for their management and any employee, like imagine that.

SPEAKER_02

Insane. And now they're back up over 100 bucks. You would have probably dreaded looking at the stock price entirely.

SPEAKER_00

Oh, totally, crazy. Well, I think there's more to talk about in the future too, but this was good, very interesting. I can't imagine going from over $100 down to a dollar and a half, and then maybe back up to whatever they're at today, which I don't even know. Wow, crazy. And hey, for any small micro-cap that's struggling, here's an example that you can rebound, you can stay alive, you can come back. All right.

SPEAKER_02

Yeah, homework exercise: go to any stock chart, look up AAOI, and look at the story. Look at the peak and look at the trough, and you'll see what this whole thing is about. And then look at the peak again now.

SPEAKER_00

It's impressive. We'll have to come back in a year and see what happened. Yeah. Okay, listeners, if you're enjoying Semi Doped, thank you for listening. We'd love it if you'd give us a five-star rating and a quick review on Apple Podcasts, Spotify, YouTube, or wherever you listen or watch. YouTubers, thanks for all your comments. We just broke a thousand YouTube subscribers, I believe, which is awesome. So if you each go tell a friend, maybe we can hit 2,000. Subscribe to our newsletters, and thanks.