
[CPU] [Very long text] [Machine translation complete] Q&A with AMD's CTO at the Wells Fargo 2022 TMT Summit

Posted on 2022-12-3 21:46
This post was last edited by 埃律西昂 on 2022-12-3 22:17

Source: https://event.webcasts.com/viewe ... p;tp_key=79ddb5e667



Transcribed from a recording, so there are certainly inaccuracies. Some content is missing near the end (buffering problems occurred during the recording).



Thank you so much, Mark. Thanks for having me here.

So I'm extremely excited to have a discussion with you about architecture and everything that's been going on at AMD over the past nine years. I'm going to start with a little stat, though. When Mark joined, October 24th of 2011, I'll note that AMD's enterprise value was $4 billion. Last night, when the market closed, the stock was at a $116 billion enterprise value. So I'm going to give you a ton of credit for that, Mark. Phenomenal job. And I think a lot of that is driven by the innovation engine, what you drive from the company's perspective. So maybe we'll start the discussion with just a quick overview of how you and AMD think about executing on a product roadmap, the vision. We'll talk about the engineering organization, its size and how that's expanded, and maybe talk a little bit about the overlapping-roadmap strategy, and wherever you want to take that. Because I think that's really the crux of what the AMD story has become over the past many years.

Well, thank you. And it is a big piece of the AMD story, our engineering execution. But it's really about having a clear vision, a clear goal. When Lisa and I were recruited into AMD, almost eleven years ago for me, and just about eleven years ago for Lisa, before she stepped into the CEO role, it was with a mantra to drive AMD back into sustained execution. And both of our backgrounds: we had worked together back at IBM for many years, and we're very knowledgeable in terms of what it takes to transform. Transforming, for leaders and technologists, means you have a clear vision of where you're going, you set out a clear methodology and process for how you achieve those goals, and you line up the business objectives.

So it's not actually just engineering; it has to be engineering and business, and it has to be on the foundation of a culture. And that's what's really been absolutely critical for the financial results, Aaron, that you summarized when you look back over that decade-plus. For us, there were some fundamentals: AMD has to have a competitive CPU. It ties into everything that we do. It's the heritage of the company; it's what led to the early success of AMD. So a lot of the first focus was on righting the CPU roadmap, and that is where, ten years ago, we launched what was the architecture phase of what became the Zen x86 CPU family. It was set out to be not only a competitive processor, but a family of processors. And here we are, having just released our fourth generation of Zen, first, in the middle of this year, in client desktop, and just recently with our 4th Generation EPYC servers. So we set a clear goal to have leadership x86 compute capability as a base, but also with a vision of how you build around that, how you can be more facile. And so we did, from the outset, architect for that. That's what a lot of people don't realize: they all look at Zen, that new competitive and leadership CPU architecture, as being the catalyst for AMD. But on the technology side, equally important was how we architected what we call the Infinity Architecture: how all the pieces come together, and how they, in fact, scale. And that was critical for AMD. AMD had acquired ATI and had huge pieces of IP around graphics, video acceleration, audio acceleration. These are elements that you now take for granted when you buy our laptops; all those elements are so seamlessly woven together. I'm sure we'll talk about it later.
We've now done the same thing in the data center, across our CPU and GPU and adaptive compute, but we laid the groundwork for that over a decade ago, as we started the architecture of that Infinity Architecture. The thing about semiconductors that people don't realize: in software, you can make a change of direction. You can call the play, and you can execute very, very quickly; in six months to a year, you can have a new direction set out. But in semiconductors, it's a longer lead time. It's four to five years when you set out a new direction. So we did set out those new directions right away, but that culture of execution had immediate effect. Putting that into play allowed us to win game consoles, which were key in the early years, allowed us to revitalize our graphics roadmap, and got us into a culture of execution across the company that, when you just take the last five years, has been a huge differentiator for us. And when Lisa became CEO, she really galvanized the entire company around this culture of execution, around a culture of listening to our customers, so that we make sure that what we're targeting is what the customers need, and around real excellence and quality. So it's truly a fundamental underpinning of what you alluded to.

That's a great overview, Mark. When we think about AMD and that roadmap execution, the Zen architecture, really going all in with a chiplet-based architecture versus the historical industry being more of a homogeneous chip architecture: as you think about the roadmap, always probably thinking out the next four to five years, how far do you think that takes us? At what point do we have to think about another novel approach to an architecture besides just chiplets? And where does that stand in your thought process?
Well, the way I suggest that we all think about it is that innovation always finds its way around barriers. You've all heard many times that Moore's Law is slowing down, that Moore's Law is dead. What does that mean? It's not that there are not going to be exciting new transistor technologies. Actually, I can see exciting new transistor technology for the next, as far as you can really plot these things out, about six to eight years. And it's very, very clear to me the advances that we're going to make to keep improving the transistor technology. But they're more expensive. Under the old Moore's Law, you could double the density every 18 to 24 months, but you'd stay in that same cost band. Well, that's not the case anymore. So we're going to have innovations in transistor technology, we're going to have more density, we're going to have lower power, but it's going to cost more. So how you put solutions together has to change. We did see that coming, and that was part of the motivation for the Infinity Architecture that we just spoke about, because it allowed us to be very modular in how we designed each of the elements, and that put us in a position to be able to leverage chiplets. Chiplets are really a way to rethink how the semiconductor industry goes forward. There's a lot of innovation yet to go, because that's going to be the new point of how solutions are put together. It used to be a motherboard, and you put all these discrete elements on a motherboard. What will keep innovation going, and will keep, I'll say, a Moore's Law equivalent, meaning that you continue to double that capability every 18 to 24 months, is innovation around how the solution is put together. It'll be heterogeneous.
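As a back-of-the-envelope illustration of the cadence described above (capability doubling every 18 to 24 months over a six-to-eight-year horizon), here is a minimal Python sketch. The numbers are pure compounding arithmetic, not AMD projections:

```python
# Illustrative sketch: how a "Moore's Law equivalent" cadence compounds.
# Doubling every `doubling_months` months over a span of `years` years.

def capability_multiplier(years: float, doubling_months: float) -> float:
    """Growth factor if capability doubles every `doubling_months` months."""
    return 2.0 ** (years * 12.0 / doubling_months)

for years in (6, 8):
    fast = capability_multiplier(years, 18)   # aggressive 18-month cadence
    slow = capability_multiplier(years, 24)   # relaxed 24-month cadence
    print(f"{years} years: {slow:.0f}x to {fast:.0f}x")
```

Over six years that is an 8x to 16x gain; the point of the passage is that, with transistor cost no longer falling in step, this compounding now has to come from system-level packaging rather than density alone.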
It won't be homogeneous. So you're going to have to use accelerators: GPU acceleration, specialized function, adaptive compute like we acquired with Xilinx, which closed in February this year. Those elements are going to have to come together, and then, in how you integrate it, you're going to see tremendous innovation in how those come together. And it really will keep us on pace. We actually have to, because you can just look at the demands of computing: they haven't slowed down one iota. In fact, they're escalating rapidly, with AI becoming more and more prevalent.

As a sidebar to that comment: you've obviously had tremendous success in the hyperscale cloud. Are those cloud customers coming to AMD today and saying, look, we used to use x86 as kind of a general-purpose computer, but increasingly asking you to optimize compute platforms? You mentioned heterogeneous compute; are there more specific design and architectural things that they're doing hand in hand with AMD, to optimize data center performance and power efficiency?

It's definitely the trend. I remember, again, when I started just over a decade ago, I talked to the head of infrastructure of the largest hyperscale cloud offering at the time, and that leader told me: Mark, we're going to be homogeneous. We're not changing it. That's how we get our efficiency; we're going to have just one family of CPUs across our data center. But for the reasons I said a moment ago, all the data centers have changed, because you can't keep pace with the computing demands if you have just one single x86 approach. So you need flavors. x86 is the dominant ISA out there today, so it's easiest to adopt. But it doesn't have to be x86.
We can get back to that in a moment. But we are already customizing. When you look at our hyperscale installations, we are already tailoring to the kind of workload that they have. Is it image recognition? Is it search? Is it EDA, electronic design automation, that needs a high-frequency offering? So you look at our instances today, on CPU alone, and you'll see many variations, and more to come. We'll talk about Bergamo, our dense core that goes head to head with smaller Arm cores, where you just need to put processing. Those are all tailored adaptations, which we work on with hyperscalers, because we listened, because they told us what they needed to have cost-effective solutions. And you'll see more and more accelerators adding into that mix. Microsoft announced that they have our Instinct GPU acceleration now up and running, and they're using it for their training.

Yep, that's fantastic. We'll definitely try to get to some of those in the 18 minutes we've got left. The biggest excitement recently has been this continual momentum you've had in the server market. You recently launched this Zen 4 architecture, Genoa. Maybe take us through the key architectural things in Genoa that you're excited about. And where I'm going to go with this, ultimately, is: how are you expanding your ability to address the server market? Because I think that's probably an underappreciated element of the AMD story, just that ability to expand the breadth of the product portfolio.

Yeah, so it's a great question, Aaron, and let me take that almost as a two-parter, if that's okay. So let me first talk about Genoa. We couldn't be more proud of Genoa. Again, we try to really listen to our customers. They don't want marketing. They just want total cost of ownership advantage.
And with Genoa, it really delivers that, and it delivers it in a timely way, because when you look at the server fleets out there, there's a major refresh cycle coming. So if you look at IT operators, from hyperscale across enterprise, they're looking to really improve their total cost of ownership. Typically, you're actually power-limited in how you can achieve your total computing needs, and you're looking for economic growth. What Genoa does is leverage the fact that we took the CPU complex and moved it from seven nanometer to five nanometer. So it's on the cutting edge: TSMC five nanometer. Remember what I said earlier: the new transistors are still giving you more density and more performance per watt. So we combined five nanometer on the CPUs with our design techniques; we partner very closely from a design and technology standpoint with TSMC, and we improved 48% on the efficiency of computing. It was a huge generational gain in performance per watt. And that's how we're able to go from 64 cores in a single socket to 96 cores in a single socket. So that's element number one: really driving a very, very strong compute. Just on the raw core capability, it was our biggest generational gain of that kind of efficiency. But our customers also need balanced computing. That compute is only as good as your ability to feed it; you have to have the I/O and memory. So we jumped to PCIe Gen 5, which doubled the I/O bandwidth coming in and out, and we went from DDR4 to DDR5, the newest memory, which runs much faster. And we increased from eight channels going out to memory to twelve channels going out to that memory. So a significant gain in memory bandwidth and I/O bandwidth. That's how we're able to jump to 96 cores and have that energy efficiency, leveraging TSMC. We kept the I/O and memory die on the older, more economical node, so it kept the costs in control.
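The memory-bandwidth side of that generational jump can be sanity-checked with a quick sketch. The talk itself only mentions DDR4 to DDR5 and eight to twelve channels; the specific transfer rates below (DDR4-3200 for the Milan-class part, DDR5-4800 for the Genoa-class part) are assumptions added for illustration:

```python
# Peak theoretical DDR bandwidth: transfer rate (MT/s) x 8 bytes/transfer
# x number of channels. Transfer rates here are assumed, not from the talk.

def peak_bw_gb_s(mega_transfers: int, channels: int, bytes_per_transfer: int = 8) -> float:
    """Peak theoretical bandwidth in GB/s for one DDR configuration."""
    return mega_transfers * bytes_per_transfer * channels / 1000.0

milan = peak_bw_gb_s(3200, 8)    # assumed DDR4-3200, 8 channels
genoa = peak_bw_gb_s(4800, 12)   # assumed DDR5-4800, 12 channels
print(f"Milan-class: {milan:.1f} GB/s")
print(f"Genoa-class: {genoa:.1f} GB/s ({genoa / milan:.2f}x)")
```

Under those assumptions the per-socket peak grows from roughly 205 GB/s to roughly 461 GB/s, about 2.25x, which is what lets a 50% larger core count stay fed.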
And that, again, is our chiplet architecture. We have different technology nodes all in a single solution, and the result is just a massive TCO benefit for the customers.

And part two of your question: exactly, we're expanding our TAM. When you have that kind of offering, what we're able to do with that kind of performance is, one, we offer Genoa to sit right on top of our 3rd Generation EPYC, Milan, because Milan is still a leadership processor in the server market. So, one, we have, from top to bottom of the stack, incredible coverage now, with the kind of granularity that our customers need to really cover hyperscale through enterprise. And we are adding, in the first half of next year, what we call Bergamo, which will be with our Zen 4c. We added a version of Zen 4; it's still Zen 4, it runs code just like Genoa, but it's half the size. And that competes head to head with Graviton and Arm-based solutions, where you don't need the peak frequency; you're running workloads, like Java workloads, throughput workloads, that don't have to run at peak frequency, but you need a lot of cores. So we're adding that in the first half of '23, and then later in 2023 we're adding Siena, which is a variant targeted at the telecom space. So we're really, really excited about our TAM growth in server.

Yeah. So one of the questions I get is that, as you've clearly executed on gaining share in the server market, taking that performance leadership and continuing to build on it, how does pricing factor into the competitive landscape? There's always a concern that, hey, a competitor is going to get more aggressive, maybe Arm shows up more. How do you see the pricing envelope factoring into your strategy, from a server-side perspective?

That's a great question.
And that's really one of the driving forces in the TAM expansion. What you're seeing is that the market is growing so dramatically, it's drawing new competitors in. And everyone's looking for their niche. If you can provide a niche where you're really tailoring to a specific workload, then you can drop out circuitry not needed for other workloads and have a more economical solution. So it really, Aaron, was a driving force in us expanding the offerings that we have. One, what I talked about earlier: positioning Genoa, at such strong performance, on top of Milan. That gives us flexibility of price over the broad range of offerings that we have, from Genoa, from 4th Gen EPYC, inclusive of 3rd Gen EPYC. But also, again, Bergamo, coming with that dense core in the first half of 2023, is also intended to give us that TCO advantage. People don't buy on just pure price. They're looking at the total cost of ownership, and they're looking at that total cost of ownership for their workload. So the way we're attacking price is making sure that we have configurations that are tailored to the workloads our customers are running, and are priced to give them a significant total cost of ownership advantage.

With Genoa, Genoa-X, Bergamo, Siena: is there other white space that you see in the data center, an area for you guys to continue to expand the portfolio?

Well, you mentioned Genoa-X. I didn't mention that in the variants, and I'll add it now. That's a version where we stack cache right on top of the CPU, and that's really tailored to make high-performance workloads, like EDA or database workloads, even more TCO-effective. So yes, we've covered, I'll say, the prime markets.
When you start looking beyond the variants that we have now, you start getting to more, I'll say, corner cases of the market. And again, we will listen to our customers. Workloads change over time, and particularly now you're seeing AI come in as a workload affecting almost every kind of application. So we're building AI into each one of those variants, and that, to us, is the white space that we're now covering. We started it with 4th Gen EPYC, with Genoa, and you'll see more and more AI capabilities in our roadmap as we go forward.

That's perfect. So I'm going to maybe shift outside just the server side of the world. One of the other things AMD has done is: you have a GPU strategy with Instinct; you mentioned Microsoft up and running with Instinct instances. You've got an FPGA strategy with the Xilinx asset, and they had some data center strategy there. You have Pensando, which you bought, I think it was earlier this year, for DPUs. When do we start to see some of these other adjacent data center pieces of the portfolio? How are you thinking about those materializing?

The two acquisitions you mentioned, Xilinx and Pensando, were fundamental. I don't think people quite realize how important those acquisitions were in terms of rounding out AMD's portfolio. When you think about what Xilinx brought to bear, it is adaptive compute, which is inclusive of FPGAs, but it is also where even more tailored solutions are needed. So it has embedded Arm cores, higher-performance Arm cores, and embedded accelerators, along with that adaptive compute, along with networking capability. It brings to bear a very strong embedded track record with telecommunications, defense, a broad range of applications, and a growing footprint in the data center.
And with Pensando, we have a programmable SmartNIC that's absolutely a leadership play. It's been adopted in hyperscale, and it has 144 P4 engines. P4 is a programming language now becoming the de facto standard for allowing microservices to come into the data center. And the Pensando offering, now in its second generation, is the absolute leader in flexibility, being able to tailor these solutions: whether you need software-defined storage, whether you need a firewall, a deep packet inspection capability, whether you need optimization of your flow, offloading capabilities from the CPU. All of these are examples of where the SmartNIC can be deployed. So these additions are really enabling us to deepen our footprint with our customers. And honestly, when you look at the trend, you're seeing the need for performance such that the CPUs, the accelerators, and the connectivity have to move in tandem to provide the kind of performance that's needed going forward.

Yep. We touched on it earlier in the architectural overview, but I want to double-click on the Infinity Architecture a little bit and maybe understand its evolution, because it seems to be a key building block of the strategy, and that probably remains the case going forward. How do we think about the Infinity Architecture's evolution? Does it ever become an off-chip interconnect? Is there more to be said around Infinity in general?

Yeah. When you hear that term from AMD, you need to think of it as AMD's scalability architecture. It's what allows us to go from a one-socket CPU to a two-socket CPU and scale almost linearly. Why? Because that Infinity Architecture connecting one socket to two sockets is the same guts, the same technology guts, that's in a single-chip implementation of our CPUs.
So as we connect CPUs on a single die, and then we connect that single die to another die, it's the same architecture, seamlessly allowing you to scale. Then we extended that to GPU. When you connect a GPU to a GPU, it's that same Infinity Architecture, that same fundamental approach, allowing you to have very, very high bandwidth, high connectivity at low latency, to let you scale CPU to GPU. We do that, of course, in our client products, but also with what we announced with our next-generation Instinct that we already have back in the labs, our MI300. It is a true data center APU: a CPU and GPU acceleration leveraging the Infinity Architecture to share the same memory, fully coherently. It's all sharing high-bandwidth memory. And it goes beyond that. When you think about what I commented on earlier with chiplets, the Infinity Architecture going forward will encompass chiplet-to-chiplet interconnect. We do that already today with our chiplets; we have over 40 chiplets in production today. But we are a founding member of UCIe. UCIe is a new standard, and standards can take a few years, but that standard has tremendous momentum, and it will create an ecosystem of how chiplets can be put together. That allows us to even further help hyperscalers tailor their solutions, when you can put a hyperscaler's accelerator together with our chiplets. And we have a semi-custom division, we call it S3, that's already stood up and working with our customers to enable that kind of customization. So the Infinity Architecture was architected to be tremendously flexible, and it's absolutely key for us to hit the industry trends going forward.

So, this might be not related.
I'm not quite sure, but our team has written a lot about CXL as an interconnect, connecting big pools of memory and other peripherals. Is CXL competitive with the Infinity Architecture? Is it a complement? I'm just curious how you see the parallels between what seem to be more and more kinds of interconnects.

Yeah, so that's CXL, Compute Express Link. It's a standard that will actually be used as well as part of the whole chiplet interconnect. It allowed us, in a standard way, to avoid bus wars in the industry over how we all attach acceleration or memory extensions onto our compute complexes. And it's a great story of collaboration, by the way, because AMD led the way with Xilinx, with Arm, with others, and we were going on a standard that was called CCIX, and Intel was going another direction. And all of the parties came together. Intel had started the proposal, CXL, and we looked at it and said, look, if we can make this a level playing field, we can avoid bus wars in the industry, different standards that just compete against each other and prohibit an ecosystem from coming together. And so we did bring that consortium together, very successfully. In Genoa, we have support for CXL; we call it 1.1-plus. That's the first generation of the standard, and why we call it "plus" is that we added support for what's called type-3 memory pooling. So on 4th Gen EPYC, our Genoa system, today, you can extend the memory and add memory. By the way, as I told you, Genoa is DDR5, the latest memory. But let's say you want additional memory that's less expensive, and you want to run DDR4: you can use CXL to attach DDR4 to extend your memory, and you can actually pool the memory across different nodes using a CXL switch.
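The economics behind that type-3 memory-expansion point can be sketched in a few lines. Every capacity and price below is a made-up placeholder (the talk quotes no figures); the sketch only shows why mixing cheaper CXL-attached memory with local DDR5 lowers the blended cost per gigabyte:

```python
# Hedged sketch of CXL Type-3 memory-expansion economics: local DDR5
# plus cheaper DDR4 attached over CXL. All capacities and $/GB prices
# are hypothetical placeholders, purely to illustrate the trade-off.

def blended_cost_per_gb(tiers):
    """tiers: list of (capacity_gb, dollars_per_gb). Returns (total_gb, blended $/GB)."""
    total_gb = sum(cap for cap, _ in tiers)
    total_cost = sum(cap * price for cap, price in tiers)
    return total_gb, total_cost / total_gb

# Hypothetical node: 768 GB local DDR5 plus 1 TiB of DDR4 pooled via CXL.
tiers = [(768, 4.00),    # local DDR5 (assumed $/GB)
         (1024, 2.50)]   # CXL-attached DDR4 (assumed $/GB)
gb, per_gb = blended_cost_per_gb(tiers)
print(f"{gb} GB at ${per_gb:.2f}/GB blended")
```

The trade-off, of course, is latency: the CXL tier sits farther from the cores, which is why it suits capacity-hungry rather than latency-critical working sets.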
And it's just the start. You're going to see CXL, over multiple generations, create a whole ecosystem of accelerators and solutions. By the time we get to generation three of CXL, you're going to see it support even clustering solutions across the standard.

So, Mark, we've got two minutes left, and this is probably a longer discussion than just two minutes, but I'm going to ask you because I get the question a lot: Arm in the data center. How do you see the Arm architecture evolving competitively? And I know you mentioned Bergamo as a competitor.

Yeah, so people really get confused on this point. They think it's Arm versus x86 versus RISC-V. It's really all about the solution that you put together. Actually, some of you will recall that our roadmap, when you go back eight, nine years ago, had both Arm and x86 in it, and we de-featured the Arm because the ecosystem still had too far to go. We could have made it; we had a design approach that was going to make the custom Arm design for AMD equally performant to the x86. But the ecosystem wasn't there, so we kept our focus on x86, and we said, let's watch this space. And Arm is now developing more of a robust ecosystem. Our strategy is very straightforward: keep our x86 performance growing such that it's a leadership capability. So if you want to tap the ecosystem that's most dominant out there, we want to have absolute leadership in capability and TCO, for the comments I made earlier. But if someone has reasons that they want Arm, we have a custom group, the S3 group I described earlier, and we're happy to work with them to implement an Arm-based solution. We're not married to an ISA. We're married to getting our customers the absolute best solution and delivering incredible value to them.
And again, an important note: Versal, within the Xilinx asset, has Arm cores. It's almost like a Swiss Army knife of compute, multiple different paths. Versal was Arm-based; we're not changing that. The Pensando architecture is Arm-based; we're not changing that. Those are great examples, because those are tailor-made applications that don't need that whole ecosystem. When you use a Xilinx device, when you use a SmartNIC device, you don't need that ecosystem of applications, because it's a point application. It's a tailored application.

Mark, thank you so much for joining us. That's great, a great overview.
OP | Posted on 2022-12-3 21:56
This post was last edited by 埃律西昂 on 2022-12-4 07:33

修正版机翻:



非常感谢你,马克。谢谢你把我叫来。我非常兴奋能和你们讨论建筑方面的东西以及AMD在过去90年里发生的一切。我先从数据开始。2011年10月24日马克加入时,我给他发邮件说A-M-d的企业价值是40亿美元。昨晚股市收盘时,该股的企业价值为1160亿美元。我非常感谢你的成绩,出色的工作。我认为在很大程度上是由创新引擎驱动的,你知道,从公司的角度来看。也许我们应该先来快速概述一下你们对AMD如何执行产品路线图和愿景的看法。我们会讨论工程组织和规模以及它是如何扩展的,也许还会讨论一点重叠的路线图策略以及你想把它用到哪里。因为我认为才是AMD过去多年来故事的症结所在。,谢谢你。a-b故事的重要部分是我们的工程执行。但关键是要有清晰的愿景,清晰的目标。还有,你知道的,当丽莎和我被招募到a的时候,我几乎。11年,就在大约11年前,Lisa,在她成为CEO之前,她用咒语推动AMD回到持续执行。我们两个人的背景都是,我们在IBM共事多年,我们非常了解转型需要,转型转型的领导者和技术人员意味着你对自己的方向有清晰的愿景,你为如何实现些目标制定了清晰的方法和过程,你将商业目标排列起来。不只是工程,它必须是商业上的工程,它必须建立在一种文化的基础上。而且,你知道,那是,那是真正绝对关键的,你知道,财务结果是?当你回顾过去的十多年时,你总结了一下,你知道,对我们来说,有一些基本原则安迪必须有有竞争力的CPU。它与我们所做的一切都息息相关。我们,你知道,是,是公司的传统。我的意思是,AMD早期成功的原因。

第重点是写CPU路线图,十年前,我们开始了架构阶段,也后来的禅宗x86CPU家族。它不仅是有竞争力的处理器,而且是处理器家族。我们刚刚发布了第四代产品,你知道,今年年中,在客户端桌面。就在最近,我们的第四代史诗服务器。,你知道,你知道,设定明确的目标在基础上拥有领先的x86计算能力,但也要有愿景,你如何围绕它建设?怎样才能更轻松?我们从一开始就么做了。是很多人没有意识到的,他们都把Zen看作是催化剂,新的竞争和领导的cpu架构是催化剂,但在技术方面,同样的,是我们如何构建我们所谓的无限架构,所有的部分是如何组合在一起的,以及它们实际上是如何扩展的。是关键的框架。D-A-B收购了ATI并且拥有大量关于图像,视频加速,音频加速的ip。当你购买我们的笔记本电脑时,你会认为些元素是花岗岩,而所有些元素,你知道,是如此无缝地交织在一起。我们以后再谈吧。

我们现在已经在数据中心做了同样的事情,通过我们的CPU和GPU,自适应计算,但是,但是,我们在十多年前就奠定了基础,当我们开始构建无限架构时,你知道,半导体的问题是人们没有意识到在软件中,你可以改变方向。你可以调用玩法,你可以非常非常快地执行。在六个月到一年的时间里,你可以有新的方向。但在半导体领域,需要更长时间。设定新方向需要四五年的时间。,我们马上就制定了些新方向。但是种执行文化产生了立竿见影的效果。把一点付诸实践让我们赢得了游戏机在早期是很关键的,让我们重振了我们的图形路线图并进入了整个公司的执行文化当你现在看到它时,你只是看就在过去的五年里,对我们来说是巨大的区别。而且,你知道,当Lisa成为CEO时,她真的激励了整个公司围绕执行的文化,围绕倾听客户的文化。我们要确保我知道我们的目标是客户所需要的真正的卓越和质量。你真的是你的态度的基本支柱。作为很好的概括标志,当我们想到a-m-d的路线图,执行,禅宗架构,真正全身心投入到以芯片为基础的架构中而不是,你知道,历史上的行业更同质化的芯片架构。你知道,当你考虑路线图时,总是在考虑未来4到5年,你认为我们能走多远?你知道,在情况下我们必须考虑除芯片之外的另一种新的架构方法?在你的思考过程中处于位置?我建议大家思考的方式是创新总能绕过障碍。

摩尔定律相关
你们都听说过很多次摩尔定律变慢了。摩尔定律已经死了。是意思?并不是说不会出现令人兴奋的晶体管新技术。事实上,我可以预见到令人兴奋的新晶体管技术,你知道,只要你能把些事情真正地规划出来,,你知道,,6年,8年。对我来说,是非常非常清楚的进步,我们将继续,你知道,改进晶体管技术。但是它们更贵。以前的摩尔定律是,密度可以翻倍,每18到24个月,但你会保持在相同的成本区间。现在情况不同了,我们要在过渡性技术上进行创新。我们会有更大的密度,我们会有更低的功率,但它的成本更高,你把解决方案放在一起的方式必须改变。我们确实预见到了一点,也是我们刚刚谈到的发明架构的动机之一,因为它允许我们非常模块化以及我们如何设计每个元素,让我们能够利用Chiplets。《ChipletsUh》真的是一种重新思考半导体行业如何发展的方式。还有很多创新有待进行,因为将成为解决方案如何组合在一起的新点。它曾经是主板,你把所有些街头元素放在主板上,是让创新持续下去?我会说,是摩尔定律的等效物,意味着你会继续,你知道,每18到24个月,能力翻一番。是围绕解决方案如何组合的创新。它是异质的。你不会是同质的,你必须使用加速器,GPU加速,专门的函数,自适应计算,就像我们收购Xilinx,在今年2月关闭。因此,些因素必须结合在一起。你如何整合它你将会看到巨大的创新如何将它们结合在一起。真的会让我们跟上进度。我们必须么做。因为你可以看看计算的需求。他们连一丁点儿也没卖。

ISA:(?)
事实上,随着人工智能变得越来越普遍,重新调整的速度很快。作为那颗彗星的侧边栏,你显然在超大规模的云中取得了巨大的成功。你知道,客户那些今天来到AMD的云客户,他们说,看,你知道,我们过去使用的是,x86那种通用计算机。但是他们越来越多地要求你优化计算平台。你提到了异构计算,还有更多的,具体的设计,架构方面的东西他们和A-M-D一起做,你知道,优化数据中心的性能和能源效率。绝对是一种趋势。当我再次想起,当我开始的时候,就在十多年前,我和当时最大的超大规模云服务的基础设施负责人交谈,领导告诉我,马克,我们将是同质的。我们不会改变它。我们提高效率的方法,我们将有,a-a的家族,你知道,在我们的数据中心有cpu。但出于些原因,我只是坐了一会儿以前,所有的数据中心都发生了变化,因为您无法跟上计算需求的步伐。如果你有,你知道,只有,你知道,x86的方法,你需要口味。x86是显性的吗?ISA架构现在已经出现了,它是最容易采用的。,

没有。不一定是x86。我们可以回到那是在一瞬间。但是,我们已经开始定制了。当你看到我们超大规模的安装时,我们已经在追赶他们的工作量了。是图像识别吗?是搜索吗?它是EDA电子设计自动化需要高频供应吗?因此,您今天仅在CPU上查看我们的实例,您将看到许多变体和更多变体。我们将讨论Bergamo,我们的密集核心与,你知道的,较小的手臂核心,在那里你只需要进行处理。些都是量身定制的适应,我们用超尺度来工作,因为我们倾听,因为他们告诉我们他们需要才能有成本效益的解决方案。你会看到越来越多的加速器加入其中。微软宣布他们有我们的本能,我们的GP加速,现在开始运行。你是来训练他们的,是的,那太棒了。当然,在剩下的18分钟里,我们会试着讲一些。

服务器:
你知道,我认为最近最令人兴奋的是服务器市场持续的势头。你最近推出了,你知道,Genoa架构的一部分,你知道,也许也许带我们了解一下,总的来说,有哪些关键的建筑元素是,你知道,让你感到兴奋的。我最终想说的是,你们是如何扩大你们的能力来应对服务器市场的?因为我认为是AMD故事中被低估的元素,只是扩展能力,你会对产品组合感到惊讶。

是的,是一份很棒的问卷,如果可以的话,我把它分为两部分。因为首先让我谈谈,呃,一般情况。呃。我们再一次为将军感到骄傲。我们试着去倾听顾客的声音。他们不需要营销。他们只想要总拥有成本优势,热那亚确实做到了一点。它及时地提供了它,因为当你看到那里的服务器舰队时,人们,你知道,有主要的刷新周期即将到来。因此,如果你从整个企业的超大规模来看it运营商,他们正在寻求真正提高他们的总拥有成本。通常情况下,在实现全部计算需求的方式上,您实际上受到了限制。你在寻找经济增长,通用公司所做的利用我们把CPU从7纳米移动到5纳米的事实。它处于前沿。吻约五纳米。还记得我之前说过的话吗?新的晶体管仍然能给你带来更大的密度,还有,你知道的,还有更好的性能,有用?呃?我们把5纳米的处理器和我们的设计技术结合起来。从设计和技术的角度来看,我们与台积电紧密合作,我们的计算效率提高了48%。是每瓦特性能上的巨大进步。为我们能够从插槽64个四分之一的芯片发展到插槽96个核。是第元素,它驱动了非常非常强大的计算。就原始的核心能力而言,是我们代人最大的收获。但我们的客户也需要平衡计算。只有当你给它喂食的时候你就有了伊俄涅的记忆。我们跳到PCIe Gen5。我们把IO带宽增加了一倍,从DDR4到DDR5,是最新的内存,运行得更快。你,你,我们从8CH内存增加到12CH内存。有很大的内存带宽,我欠带宽。为我们能跳到更好的分数。利用台积电的能源效率。我们把爱荷华州的内存保留在旧的,更经济的节点上,它保持,你知道,保持成本在控制之中。又是我们的建筑芯片。我们在单一的解决方案中有不同的技术节点,结果,你知道,为客户带来了大量的利益。你问题的第二部分没错,我们要延长时间。,你知道,当你有产品,我们能够做的是,我们提供热那亚坐在我们的第三代Eypc米兰之上,因为米兰仍然是服务器市场上的领先处理器。我们有从上到下的堆栈,现在是不可思议的覆盖范围,有一种粒度,我们的客户需要真正覆盖超大规模的企业。

In the first half of next year we'll add something called Bergamo, built on our Zen 4c core. We've grown the CPU teams and added a version of Zen that runs the same code as Genoa but at half the area, to compete head-on with Graviton and other Arm-based solutions, where you don't need peak frequency: you're running throughput workloads, like Java throughput workloads, that don't need peak frequency but do need lots of cores. We add that in the first half of '23, and later in 2023 we add Siena, a variant targeted at the telecom space.

We're very, very excited about that cadence.

Now, a question I often get: you've obviously gained share in the server market, taken performance leadership, and kept building on it. What I'm often asked is how pricing plays out in the competitive landscape. There's always a worry that, hey, the competition will get more aggressive, and more Arm will show up. How do you think about the role pricing plays in your strategy?

That's a good question on the server side. It's really, you know, one of the drivers of the stack expansion. What you see is a market that is growing dramatically, and that attracts new competitors. Everyone is looking for their segment, and if you can serve a segment, if you can really tailor for a specific workload, you can drop the circuitry other workloads don't need and, you know, offer a more economical solution. That has really been the driver for us broadening the product offering.
First, as I said before, positioning Genoa above Milan's peak performance gives us a lot of flexibility on price, from Genoa down through third-generation EPYC. And again, with Bergamo's dense cores arriving in the first half of 2023, the intent is to give us the TCO lead there as well. People don't buy on sticker price alone. They look at total cost of ownership, the TCO for their workloads. So our pricing strategy is to make sure our configurations are tailored to customers' workloads and priced to deliver them a significant TCO advantage.

With Genoa, Genoa-X, you know, Bergamo, Siena: are there other white spaces in the data center

that would let you keep expanding the portfolio?

Well, you mentioned Genoa-X, which I hadn't included among the variants, so let me add it now. That's the version where we stack cache on top of the CPU, and it's tailored for high-performance workloads like EDA or database workloads, where it's even more effective. Yes, I'd say we've covered the bulk of the market. Once you go beyond the variants we have today, you're into what I'd call the corner cases of the market. Again, we listen to customers, and workloads change over time. Especially now, you see AI becoming part of almost every workload. We're building AI into every variant; for us that starts now with fourth-generation EPYC, Genoa. As we go forward, you'll see more and more AI capability across our roadmap.

Perfect.

The Pensando and Xilinx acquisitions, and heterogeneous architecture:
I want to move beyond the server side. You know, another thing AMD has done: you have a GPU strategy with Instinct; you mentioned the Microsoft instances running Instinct. You have an FPGA strategy through Xilinx, and they have data-center strategies of their own. You bought Pensando, I believe earlier this year, for data-center DPUs. Where and when do we start to see these other adjacent data-center pieces of the portfolio? How do you think about them coming together?

You mentioned Xilinx and Pensando; those are fundamental. I don't think people realize how important these acquisitions are to rounding out AMD's portfolio. What Xilinx brings is adaptive computing. That includes FPGAs, but it's also where you go when you need more customized solutions. It has embedded Arm cores, you know, higher-performance Arm cores, with embedded accelerators, adaptive compute, and networking capability. It brings a very strong embedded footprint in telecom, defense, a broad set of applications, and a growing footprint in the data center. With Pensando, we have a programmable smart NIC that is absolutely a leadership play. It's already deployed at scale, and it has 144 P4 engines. P4 is a programming language that is becoming the de facto standard for enabling microservices in the data center. The Pensando product, now in its second generation, is the absolute leader in flexibility, in the ability to customize these solutions: whether you need software-defined storage, whether you need firewalling and deep packet inspection, whether you need to optimize your flow offload, taking that work off the CPU. These are all examples of what the smart NIC can do. You know, these acquisitions really let us deepen our relationships with customers.

Honestly, when you look at the trends, you see that the demands on performance, on CPUs, accelerators, and the connectivity between them, have to advance in lockstep to deliver the performance that's needed going forward.

Yes. We touched on this in the architecture overview earlier, but I want to double-click on Infinity Architecture and maybe understand its evolution, because it seems to be a key building block of the strategy and will remain so. How should we think about where Infinity Architecture goes from here? Does it ever become an off-chip interconnect? Is there more value around Infinity in general?

Yes, it's worth thinking about what that word means when you hear it from AMD. Infinity Architecture is AMD's scalability architecture. It lets us scale from one CPU socket to two CPU sockets with nearly linear scaling. Why? Because the Infinity Architecture connecting socket to socket uses the same principles, the same underlying technology, as the single-chip implementation inside our CPUs. When we connect CPU die to die, it's the same architecture, seamlessly, and that's what lets you scale. We extended it to GPUs: when you connect GPUs, it's the same Infinity Architecture, the same fundamental approach, giving you very, very high bandwidth, high connectivity, and low latency, letting you scale CPU to GPU, which we do, of course, in our client products as well.

But as we announced, we're bringing out our next generation of Instinct, and we already have it back in the lab: MI300. It's a true data-center APU. It's CPU plus GPU acceleration, leveraging Infinity Architecture to share the same memory fully coherently; both share high-bandwidth memory. And it doesn't stop there.

UCIe and CXL:
When you think back to my earlier comments on chiplets, the Infinity Architecture of the future will include chiplet-to-chiplet interconnect. We already do that today with our own chiplets; we have more than 40 chiplets in production. And we're a founding member of UCIe. UCIe is a new standard; it will take a few years, but it has tremendous momentum, and it will create the ecosystem for how chiplets come together. That lets us go even further in helping hyperscalers customize their solutions, when you can put a hyperscaler's accelerator together with our chiplets.

We have a custom-solutions group, we call it S3, that is already up and running and working with customers to enable this kind of customization. Infinity Architecture is very flexible, and that is key to our keeping pace with where the industry is going.

I'm not as deep on this, but our team has written a lot about CXL as the interconnect for attaching large memory pools and other peripherals. Does CXL compete with Infinity Architecture? Is it complementary? I'm just curious how you see the parallels among these proliferating interconnects.

Yes, CXL: Compute Express Link. It's a standard, and it will actually, you know, also be used as part of the overall chiplet interconnect. But what it allows, in a standard way, is for us to avoid a bus war in our industry over how we attach accelerators or memory expansion to our compute complexes. It's a great collaboration story, by the way, because AMD, Xilinx, Arm, and others were out ahead driving a standard called CCIX, while Intel was going in another direction, and all parties came together. Intel had started the CXL proposal, and we looked at it and said, look, if we can make this a level playing field, we can avoid an industry bus war, avoid different standards fighting each other and preventing an ecosystem from coming together. We succeeded in bringing the consortium together. So in general, we do support CXL. We call ours CXL 1.1-plus. The reason we call it plus is that we added support for what's called Type 3 memory pooling in our fourth-generation EPYC, Genoa, our general-purpose part. Today you can expand and add memory; by the way, as I said, Genoa has DDR5, the latest memory. But suppose you want cheaper additional memory and you want to run DDR4: you can use CXL to attach DDR4 to expand your memory, and you can actually use a CXL switch to pool memory across different nodes. And that's just the start. You'll see CXL, over the coming generations, create a full ecosystem of accelerators and solutions. When we get to CXL 3.0, you'll see it even support clustered solutions across the standard.
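The DDR4-over-CXL example above is at heart a memory-tiering cost argument. A toy blended-cost model makes the shape of it clear; all capacities and dollar-per-GB prices below are invented for illustration, not real DRAM pricing.

```python
# Toy model of CXL Type 3 memory expansion as a cheaper capacity tier.
# Capacities and $/GB figures are hypothetical; the DDR5-native plus
# DDR4-over-CXL split mirrors the example given in the interview.

native_ddr5_gb, ddr5_cost_per_gb = 768, 4.0    # hypothetical native tier
cxl_ddr4_gb, ddr4_cost_per_gb = 1024, 2.0      # hypothetical cheaper CXL tier

total_gb = native_ddr5_gb + cxl_ddr4_gb
total_cost = (native_ddr5_gb * ddr5_cost_per_gb
              + cxl_ddr4_gb * ddr4_cost_per_gb)
blended = total_cost / total_gb                # blended cost per gigabyte

print(f"capacity: {total_gb} GB")
print(f"blended $/GB: {blended:.2f} vs {ddr5_cost_per_gb:.2f} for all-DDR5")
```

The point is simply that a cheaper CXL-attached tier pulls the blended cost per gigabyte below an all-DDR5 build while adding capacity; the latency and bandwidth trade-offs of the slower tier are ignored in this sketch.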

On Arm:
Mark, we have two minutes left, and this discussion deserves longer than two minutes. But I have to ask, because I get this question about the data center: how do you think about the competitive rise of Arm architectures? I know you mentioned Graviton as a competitor.

Yes. Really, people get confused on this point. They think of it as, you know, Arm versus x86 versus SiFive. What really matters is how you put the solution together. In fact, some of you may remember we had Arm on our roadmap. If you go back eight or nine years, we had Arm and x86 on the roadmap, and we put Arm on the shelf because the ecosystem still had a long way to go. We could have done it; we had a design methodology that let AMD's custom x86 design work apply equally to Arm. But the ecosystem wasn't there, so we focused on x86; we said, let's focus on that space. Arm is now developing a stronger ecosystem. We are, of course, you know, very focused on keeping our x86 performance growing; that in itself is a leadership capability. So if you want to leverage the most dominant ecosystem, we want absolute leadership there in TCO and capability, as we discussed earlier. But if someone has a reason to want Arm, we have the custom group, the S3 group I described earlier, and we're happy to work with them to implement it in our foundational solutions. We are not, you know, we are not married to an ISA. We're committed to giving customers the absolute best solution and incredible value. And importantly, the Xilinx assets have Arm cores in them. Xilinx is like the Swiss Army knife of computing, with many different flavors, and Arm is in there; I wouldn't change that one bit. Arm is in Pensando too; I wouldn't change that one bit. These are great examples, because they're tailored applications that don't need the whole ecosystem. When you use a Xilinx device, when you use a smart NIC device, you don't need the application ecosystem, because it's a point application, a tailored application.

Mark, thank you so much for joining us.

Hey, it's been great. A great way to wrap up.
Reply posted 2022-12-4 04:23:

$1.16 billion => $116 billion