Last edited by 埃律西昂 on 2022-12-3 22:17
Source: https://event.webcasts.com/viewe ... p;tp_key=79ddb5e667
Transcribed from audio, so there are certainly inaccuracies. Some content near the end is missing (a buffering problem occurred during recording).
Thank you so much, Mark. Thanks for having me here.
So I'm extremely excited to have a discussion with you about architecture and everything that's been going on at AMD over the past nine years. I'm going to start with a little stat, though. When Mark joined, October 24th of 2011, AMD's enterprise value was $4 billion. Last night, when the market closed, the stock was at a $116 billion enterprise value. So I'm going to give you a ton of credit for that, Mark. Phenomenal job. And I think a lot of that is driven by the innovation engine, what you drive from the company's perspective. So maybe we'll start the discussion with a quick overview of how you and AMD think about executing on a product roadmap, the vision. We'll talk about the engineering organization and how its size has expanded, and maybe talk a little bit about the overlapping roadmap strategy, and wherever you want to take that. Because I think that's really at the crux of what the AMD story has become over the past many years.
Well, thank you. And it is a big piece of the AMD story, our engineering execution. But it's really about having a clear vision, a clear goal. When Lisa and I were recruited into AMD, almost eleven years ago for me and just about eleven years ago for Lisa, before she stepped into the CEO role, it was with a mantra to drive AMD back into sustained execution. Both of our backgrounds played into that: we had worked together back at IBM for many years, and we're very knowledgeable about what it takes to transform. Transforming, as leaders and technologists, means you have a clear vision of where you're going, you set out a clear methodology and process for how you achieve those goals, and you line up the business objectives.
So it's not actually just engineering; it has to be engineering and business, and it has to be on the foundation of a culture. And that's what's really been absolutely critical for the financial results, Aaron, that you summarized when you look back over that decade-plus. For us, there were some fundamentals: AMD has to have a competitive CPU. It ties into everything that we do. It's the heritage of the company; it's what led to the early success of AMD. So a lot of the first focus was on righting the CPU roadmap, and that is where, ten years ago, we launched the architecture phase of what became the Zen x86 CPU family. It was set out to be not only a competitive processor, but a family of processors. And here we are, having just released our fourth generation of Zen, first, in the middle of this year, in client desktop, and just recently with our fourth-generation Epyc servers.

So, we set a clear goal to have leadership x86 compute capability as a base, but also with a vision of how you build around that, how you become more agile. And so we did, from the outset, architect for that. That's what a lot of people don't realize: they all look at Zen, that new competitive and leadership CPU architecture, as being the catalyst for AMD. But on the technology side, equally important was how we architected what we call the Infinity Architecture: how all the pieces come together, and how they, in fact, scale. And that was critical for AMD. AMD had acquired ATI and had huge pieces of IP around graphics, video acceleration, audio acceleration. These are elements that you now take for granted when you buy our laptops, and all those elements are so seamlessly woven together. I'm sure we'll talk about it later.
We've now done the same thing in the data center, across our CPU and GPU and adaptive compute, but we laid the groundwork for that over a decade ago, as we started the architecture of that Infinity Architecture. The thing about semiconductors that people don't realize: in software, you can make a change of direction. You can call the play, and you can execute very, very quickly; in six months to a year, you can have a new direction set out. But in semiconductors, it's a longer lead time. It's four to five years when you set out a new direction. So we did set out those new directions right away, but that culture of execution had immediate effect. Putting that into play allowed us to win game consoles, which were key in the early years, allowed us to revitalize our graphics roadmap, and got us into a culture of execution across the company that, when you just take the last five years, has been a huge differentiator for us. And when Lisa became CEO, she really galvanized the entire company around this culture of execution, around a culture of listening to our customers, so that we make sure that what we're targeting is what the customers need, and around real excellence and quality. So it's truly a fundamental underpinning for us.

That's a great overview, Mark. When we think about AMD and that roadmap execution, the Zen architecture, really going all in with a chiplet-based architecture versus the historical industry's more homogeneous chip architecture: as you think about the roadmap, probably always thinking out the next four to five years, how far do you think that takes us? At what point do we have to think about another novel approach to an architecture besides just chiplets? And where does that stand in your thought process?
Well, the way I suggest that we all think about it is: innovation always finds its way around barriers. You've all heard many times that Moore's Law is slowing down, Moore's Law is dead. What does that mean? It's not that there aren't going to be exciting new transistor technologies. Actually, I can see exciting new transistor technology for the next, as far as you can really plot these things out, about six to eight years. And it's very, very clear to me the advances that we're going to make to keep improving the transistor technology. But they're more expensive. It used to be, with the old Moore's Law, that you could double the density every 18 to 24 months, but you'd stay in that same cost band. Well, that's not the case anymore. So we're going to have innovations in transistor technology, we're going to have more density, we're going to have lower power, but it's going to cost more. So how you put solutions together has to change. We did see that coming, and that was part of the motivation of the Infinity Architecture we just spoke about, because it allowed us to be very modular in how we designed each of the elements, and that put us in a position to be able to leverage chiplets. Chiplets are really a way to rethink how the semiconductor industry goes forward. There's a lot of innovation yet to come, because chiplets are going to be the new point of how solutions are put together. It used to be a motherboard, and you put all these discrete elements on a motherboard. What will keep innovation going, and will keep, I'll say, a Moore's Law equivalent, meaning that you continue to really double that capability every 18 to 24 months, is innovation around how the solution is put together. It'll be heterogeneous.
It won't be homogeneous, so you're going to have to use accelerators: GPU acceleration, specialized function, adaptive compute like we acquired with Xilinx, which closed in February this year. Those elements are going to have to come together, and you're going to see tremendous innovation in how they're integrated. And it really will keep us on pace. We actually have to, because you can just look at the demands of computing. They haven't slowed down one iota. In fact, they're escalating rapidly, with AI becoming more and more prevalent.

As a sidebar to that comment: you've obviously had tremendous success in the hyperscale cloud. Are those cloud customers coming to AMD today and saying, look, we used to use x86 kind of general-purpose compute, but increasingly asking you to optimize compute platforms? You mentioned heterogeneous compute; are there more specific design and architectural things that they're doing hand in hand with AMD to optimize data center performance and power efficiency?

It's definitely the trend. I remember, again, when I started just over a decade ago, I talked to the head of infrastructure of the largest hyperscale cloud offering at the time, and that leader told me: Mark, we're going to be homogeneous. We're not changing it. That's how we get our efficiency: we're going to have just one family of CPUs across our data center. But for the reasons I said a moment ago, all the data centers have changed, because you can't keep pace with the computing demands if you have just one single x86 approach. So you need flavors. x86 is the dominant ISA out there today, so it's easiest to adopt. But it doesn't have to be x86.
We can get back to that in a moment. But we are already customizing. When you look at our hyperscale installations, we are already tailoring to the kind of workload that they have. Is it image recognition? Is it search? Is it EDA, electronic design automation, which needs a high-frequency offering? So you look at our instances today on CPU alone, and you'll see many variations, and more to come. We'll talk about Bergamo, our dense core that goes head to head with smaller Arm cores, where you just need a lot of processing. Those are all tailored adaptations, which we work on with hyperscalers, because we listened, because they told us what they needed to have cost-effective solutions. And you'll see more and more accelerators adding into that mix. Microsoft announced that they have our Instinct, our GPU acceleration, now up and running, and they're using it for their training.

Yep, that's fantastic. That's definitely more than the 18 minutes we've got left, but we'll try to get to some of it. The biggest excitement recently has been this continual momentum you've had in the server market. You recently launched this Zen 4 architecture, Genoa. Maybe take us through what the key architectural things in Genoa are that you're excited about. And where I'm going to go with this ultimately is: how are you expanding your ability to address the server market? Because I think that's probably an underappreciated element of the AMD story, just that ability to expand the breadth of the product portfolio.

Yeah, so it's a great question, Aaron, and let me take that almost as a two-parter, if that's okay. Let me first talk about Genoa. We couldn't be more proud of Genoa. Again, we try to really listen to our customers. They don't want marketing. They just want total cost of ownership advantage.
And with Genoa, it really delivers that, and it delivers it in a timely way, because when you look at the server fleets out there, there's a major refresh cycle coming. So if you look at IT operators, from hyperscale across enterprise, they're looking to really improve their total cost of ownership. Typically, you're actually power limited in how you can achieve your total computing needs, and you're looking for economical growth. What Genoa does is leverage the fact that we took the CPU complex and moved it from 7 nm to 5 nm, so it's on the cutting edge: TSMC 5 nm. Remember what I said earlier: the new transistors are still giving you more density and more performance per watt. So we combined 5 nm on the CPUs with our design techniques. We partner very closely from a design and technology standpoint with TSMC, and we improved 48% on the efficiency of computing. It was a huge generational gain in performance per watt, and that's how we're able to go from 64 cores in a single socket to 96 cores in a single socket. So that's element number one: really driving very, very strong compute. Just on the raw core capability, it was our biggest generational gain of that kind of efficiency. But our customers also need balanced computing. That compute is only as good as what feeds it: the I/O and memory. So we jumped to PCIe Gen 5, doubling the I/O bandwidth coming in and out, and we went from DDR4 to DDR5, the newest memory, which runs much faster. And we increased from eight channels going out to memory to twelve channels going out to that memory. So a significant jump in memory bandwidth and I/O bandwidth. That's how we're able to jump to 96 cores and have that energy efficiency, leveraging TSMC. We kept the I/O and memory die on the older, more economical node, so it kept the costs in control.
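[Editor's aside: the memory-system jump described above can be sanity-checked with rough peak-bandwidth arithmetic. The per-channel data rates below are assumptions based on the platforms' public specs (Milan: 8 channels of DDR4-3200; Genoa: 12 channels of DDR5-4800), not figures from the talk.]

```python
# Rough peak-bandwidth comparison for the DDR4 -> DDR5 transition.
# Assumed configs: Milan = 8ch DDR4-3200, Genoa = 12ch DDR5-4800.

BYTES_PER_TRANSFER = 8  # each DDR channel is 64 bits wide

def peak_bw_gbs(mt_per_s: int, channels: int) -> float:
    """Peak theoretical bandwidth in GB/s for a DDR memory subsystem."""
    return mt_per_s * BYTES_PER_TRANSFER * channels / 1000

milan = peak_bw_gbs(3200, 8)    # 204.8 GB/s
genoa = peak_bw_gbs(4800, 12)   # 460.8 GB/s
print(f"Milan: {milan:.1f} GB/s, Genoa: {genoa:.1f} GB/s ({genoa / milan:.2f}x)")
```

So the combined effect of faster DDR5 and four extra channels is roughly a 2.25x peak memory-bandwidth uplift, which is what lets the socket feed 50% more cores without starving them.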
And that's, again, our chiplet architecture. We have different technology nodes all in a single solution, and the result is just a massive TCO benefit for the customers. And part two of your question: exactly, we're expanding our TAM. When you have that kind of offering, what we're able to do with that kind of performance is, one, we offer Genoa to sit right on top of our third-generation Epyc, Milan, because Milan is still a leadership processor in the server market. So we have, from top to bottom of the stack, incredible coverage now, with the kind of granularity that our customers need to really cover hyperscale through enterprise. And we are adding, in the first half of this year, what we call Bergamo, which will come with our Zen 4c. We increased staffing on our CPU team, and we added a version of Zen 4. It's still Zen 4; it runs code just like Genoa, but it's half the size. And that competes head to head with Graviton and Arm-based solutions, where you don't need the peak frequency. You're running workloads, like Java workloads, throughput workloads, that don't have to run at peak frequency, but you need a lot of cores. So we're adding that in the first half of '23, and then later in 2023 we're adding Siena, which is a variant targeted at the telecom space. So we're really, really excited about our TAM growth in server.

Yeah. So one of the questions I get is: as you've clearly executed on gaining share in the server market, taking that performance leadership and continuing to build on it, how does pricing factor into the competitive landscape? There's always a concern that, hey, a competitor is going to get more aggressive, maybe Arm shows up more. How do you see the pricing envelope factoring into your strategy, from a server-side perspective?

That's a great question.
And it's really one of the driving forces in the TAM expansion. What you're seeing is the market growing so dramatically that it's drawing new competitors in. Everyone's looking for their niche, and if you can provide a niche where you're really tailoring to a specific workload, then you can drop out circuitry not needed for other workloads and have a more economical solution. So it really, Aaron, was a driving force in us expanding the offerings that we have. One, what I talked about earlier: positioning Genoa, at such strong performance, on top of Milan. That gives us flexibility on price over the broad range of offerings that we have, from Genoa, fourth-gen Epyc, inclusive of third-gen Epyc. But also, again, Bergamo coming with that dense core in the first half of 2023 is intended to give us that TCO advantage. People don't buy on just pure price. They're looking at the total cost of ownership, and they're looking at that total cost of ownership for their workload. So the way we're attacking price is making sure that we have the configurations that are tailored to the workloads our customers are running, and that are priced to give them a significant total cost of ownership advantage.

With Genoa, Genoa-X, Bergamo, Siena: is there other white space that you see in the data center arena for you guys to continue to expand the portfolio?

Well, you mentioned Genoa-X. I didn't mention that among the variants, and I'll add it now. That's a version where we stack cache right on top of the CPU, and it's really tailored to make high-performance workloads, like EDA or database workloads, even more TCO effective. So yes, we've covered, I'll say, the prime markets.
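[Editor's aside: the total-cost-of-ownership framing that keeps coming up here can be illustrated with a toy model. Every number below is hypothetical; real TCO models also count cooling, rack space, software licensing, and so on.]

```python
def tco_per_unit_perf(capex: float, watts: float, perf: float,
                      years: float = 4, usd_per_kwh: float = 0.10) -> float:
    """Toy TCO per unit of performance: purchase price plus lifetime
    electricity cost, divided by throughput (arbitrary perf units)."""
    energy_cost = watts / 1000 * 24 * 365 * years * usd_per_kwh
    return (capex + energy_cost) / perf

# Hypothetical comparison: a pricier, hotter part can still win on TCO
# if it delivers enough extra throughput per dollar and per watt.
cheap = tco_per_unit_perf(capex=5000, watts=280, perf=100)
fast  = tco_per_unit_perf(capex=8000, watts=360, perf=200)
print(f"cheap part: {cheap:.2f} $/perf, fast part: {fast:.2f} $/perf")
```

In this sketch the "fast" part costs 60% more up front yet is cheaper per unit of work over four years, which is the sense in which customers "don't buy on just pure price."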
When you start looking beyond the variants that we have now, you start getting to more, I'll say, corner cases of the market. And again, we will listen to our customers. Workloads change over time, and particularly now you're seeing AI come in as a workload affecting almost every kind of application. So we're building AI into each one of those variants, and that, to us, is the white space that we're now covering. We started it with fourth-gen Epyc, with Genoa, and you'll see more and more AI capabilities in our roadmap as we go forward.

That's perfect. So I'm going to maybe shift outside just the server side of the world. One of the other things AMD has done is: you have a GPU strategy with Instinct; you mentioned Microsoft up and running with Instinct instances. You've got an FPGA strategy with the Xilinx asset, and they had some data center strategy there. You have Pensando, which you bought, I think it was earlier this year, for DPUs. When do we start to see some of these other adjacent data center pieces of the portfolio? How are you thinking about those materializing?

The two acquisitions that you mentioned, Xilinx and Pensando, were fundamental. I don't think people quite realize how important those acquisitions were in terms of rounding out AMD's portfolio. When you think about what Xilinx brought to bear, it is adaptive compute, which is inclusive of the FPGAs, but it's also where even more tailored solutions are needed. So it has embedded Arm cores, higher-performance Arm cores, and embedded accelerators, along with that adaptive compute, along with networking capability. It brings to bear a very strong embedded track record with telecommunications, defense, a broad range of applications, and a growing footprint in the data center.
And with Pensando, we have a programmable SmartNIC that's absolutely a leadership play. It's been adopted in hyperscale, and it has 144 P4 engines. P4 is a programming language now becoming the de facto standard to allow microservices to come into the data center. And the Pensando offering, now in its second generation, is the absolute leader in flexibility, being able to tailor these solutions: whether you need software-defined storage, whether you need a firewall or a deep-packet-inspection capability, whether you need optimization of your flow, offloading capabilities from the CPU. All of these are examples of where the SmartNIC can be deployed. So these additions are really enabling us to deepen our footprint with our customers. And honestly, when you look at the trend, you're seeing the need for performance such that the CPUs, the accelerators, and the connectivity have to move in tandem to provide the kind of performance that's needed going forward.

Yep. We touched on it earlier in the architectural overview, but I want to double-click on Infinity Architecture a little bit and maybe understand its evolution, because it seems to be a key building block of the strategy, and that probably remains the case going forward. How do we think about the evolution of Infinity Architecture going forward? Does it ever become an off-chip interconnect? Is there more to be said about Infinity in general?

Yeah. When you hear that term from AMD, you need to think about it as AMD's scalability architecture. It's what allows us to go from a one-socket CPU to a two-socket CPU and scale almost linearly. Why? Because that Infinity Architecture connecting one socket to two sockets is the same guts, the same technology guts, that's in a single-chip implementation of our CPUs.
So as we connect CPUs on a single die, and then we connect that single die to another die, it's the same architecture, seamlessly allowing you to scale. Then we extended that to GPU. When you connect a GPU to a GPU, it's that same Infinity Architecture, that same fundamental approach, allowing you to have very, very high bandwidth and high connectivity at low latency, letting you scale CPU to GPU. We do that, of course, in our client products, but with what we announced, what we've rolled out with our next-generation Instinct that we already have back in the labs, the MI300: it is a true data center APU. It's a CPU and GPU acceleration leveraging the Infinity Architecture to share the same memory fully coherently. It's all sharing high-bandwidth memory. And it goes beyond that. When you think about what I commented on earlier with chiplets, the Infinity Architecture going forward will encompass chiplet-to-chiplet interconnect. We do that already today with our chiplets; we have over 40 chiplets in production today. And we are a founding member of UCIe. UCIe is the new standard; standards can take a few years, but this one has tremendous momentum, and it will create an ecosystem of how chiplets can be put together. That will allow us to even further help hyperscalers tailor their solutions, when you can put a hyperscaler's accelerator together with our chiplets. And we have a semi-custom division, we call it S3, that's already stood up and working with our end customers to enable that kind of customization. So Infinity Architecture was architected to be tremendously flexible, and it's absolutely key for us to hit the industry trends going forward.

So this might be, you know, not related.
I'm not quite sure, but I've written, our team's written, a lot about CXL as an interconnect, connecting big pools of memory and other peripherals. Is CXL competitive with Infinity Architecture? Is it a complement? I'm just curious how you see the parallels between what seem to be more and more interconnects.

Yeah. CXL is Compute Express Link, and it's a standard that will actually be used as part of the whole chiplet interconnect as well. It allowed us, in a standard way, to avoid bus wars in the industry over how we all attach acceleration or memory extensions onto our compute complexes. And it's a great story of collaboration, by the way, because AMD led the way with Xilinx, with Arm, with others, and we were going with a standard that was called CCIX, and Intel was going another direction. All of the parties came together; Intel had started the proposal, CXL, and we looked at it and said, look, if we can make this a level playing field, we can avoid bus wars in the industry, different standards that just compete against each other and prohibit an ecosystem from coming together. And so we did bring that consortium together very successfully. In Genoa, we have support for CXL; we call it 1.1-plus. That's the standard of the first generation, and why we call it plus is that we added support for what's called type-3 memory pooling. So on fourth-gen Epyc, our Genoa system, today you can extend the memory and add memory. By the way, as I told you, Genoa is DDR5, the latest memory. But let's say you want additional memory that's less expensive, and you want to run DDR4. You can use CXL to attach DDR4 to extend your memory, and you can actually pool the memory across different nodes using a CXL switch.
And it's just the start. You're going to see CXL, over multiple generations, create a whole ecosystem of accelerators and solutions. By the time we get to generation three of CXL, you're going to see it support even clustering solutions across the standard.

So Mark, we've got two minutes left, and this is probably a longer discussion than just two minutes, but I'm going to ask you, because I get the question a lot: Arm in the data center. How do you see the Arm architecture evolving competitively? And I know you mentioned Bergamo as a competitor.

Yeah, so people get confused on this point. They think it's Arm versus x86 versus RISC-V. It's really all about the solution that you put together. Some of you may recall that when you go back eight, nine years, our roadmap had both Arm and x86 in it, and we de-featured the Arm design because the ecosystem still had too far to go. We could have made it; we had a design approach that was going to make the custom Arm design for AMD equally performant to the x86. But the ecosystem wasn't there, so we kept our focus on x86, and we said, let's watch the space. And Arm is now developing more of a robust ecosystem. We certainly have a very straightforward strategy: keep our x86 performance growing, as it's a leadership capability. So if you want to tap the ecosystem that's most dominant out there, we want to have absolute leadership in capability and TCO, per the comments I made earlier. But if someone has reasons that they want Arm, we have a custom group, that S3 group I described earlier, and we're happy to work with them to implement an Arm-based solution. We're not married to an ISA. We're married to getting our customers the absolute best solution and delivering incredible value to them.
And again, an important note: Versal, within the Xilinx asset, had Arm cores. It's almost like a Swiss Army knife of compute, multiple different paths. Versal was Arm-based; we're not changing that. The Pensando architecture is Arm-based; we're not changing that. Those are great examples, because those are tailor-made applications that don't need that whole ecosystem. When you use a Xilinx device, when you use a SmartNIC device, you don't need that ecosystem of applications, because it's a point application. It's a tailored application.

Mark, thank you so much for joining us.

That's great. Great overview.