Sunday, May 11, 2025

NVIDIA AI Summit Fireside Chat with Jensen Huang and Masayoshi Son

Maginative, Nov 17, 2024. At NVIDIA AI Summit Japan, Jensen Huang and Masayoshi Son sat down for a fireside chat to discuss their latest partnership and how sovereign AI fosters innovation and strengthens Japan's technological independence. They also joked about how SoftBank was once NVIDIA's largest shareholder, about Son's offer to lend Huang money to buy NVIDIA outright, and about the failed ARM merger.

AI and The Next Computing Platforms With Jensen Huang and Mark Zuckerberg

NVIDIA, Jul 29, 2024. NVIDIA founder and CEO Jensen Huang and Meta founder and CEO Mark Zuckerberg discuss how fundamental research is enabling AI breakthroughs, and how generative AI and open-source software will empower developers and creators. They also discuss the role of generative AI in building virtual worlds, and the potential of virtual worlds for building the next wave of AI and robots.

Transcript:

0:01 Ladies and gentlemen, I have a very special guest. But could I ask everybody to sit down? 0:06 We're about to get started. My next, my next guest. 0:11 I am so impressed by this person. Three reasons. First reason 0:18 is there are only a handful of entrepreneurs, founders 0:25 that started a company that literally touched the lives of billions of people around the world as part of the social fabric, 0:35 invented services, and a state-of-the-art computing company. 0:41 Two. Very few entrepreneurs, founders, founded the company and led it to over 0:47 $1 trillion of value. And three, a college dropout. 0:56 All three things simultaneously true. Ladies and gentlemen, please help me welcome Mark Zuckerberg. 1:12 How's it going? Welcome. Mark, welcome to your first Siggraph. 1:18 All right. Can you believe this? One of the pioneers of computing. 1:24 A driver of modern computing. And I had to invite him to Siggraph. 1:30 So, anyways, Mark, sit down. It's great to have you here. Welcome. Thanks for flying down. Yeah. No, this will be fun. 1:37 I hear you’ve been going for, like, five hours already or something. Well, yeah, sure. 1:43 This is Siggraph. You know, there's 90% PhDs. And so the thing that's really great about Siggraph, as you know, 1:51 this is this is the show of computer graphics, image processing, 1:57 artificial intelligence and robotics combined. And some of the some of the companies that over the years have demonstrated 2:04 and revealed amazing things here from Disney, Pixar, 2:10 Adobe, Epic Games. And of course, you know, NVIDIA. We've done a lot of work here. 2:16 This year we introduced 20 papers at the intersection of artificial intelligence and simulation. 2:24 So we're using artificial intelligence to do, help simulation, 2:29 be way larger scale, way faster. For example, differentiable physics. 2:34 we're using simulation to create, simulation environments for synthetic data generation, for artificial intelligence. 2:41 And so these two areas are really coming together. We’re really proud of the work that we've done here. At Meta, 2:48 you guys have done amazing AI work. I mean, one of the things that, that I find amusing is, when the press 2:56 writes about how Meta has jumped into AI this last couple of years, as if, 3:02 you know, the work that the FAIR has done. remember, we all use PyTorch, that comes out of Meta, 3:09 the work that you do in computer vision the work in language models, real-time translation. 3:16 groundbreaking work. I guess my first question for you is, how do you see how the, 3:22 the advances of generative AI at Meta today? And how do you apply it to either enhance your operations 3:31 or introduce new capabilities that you're offering? Yeah. So a lot to unpack there. 3:38 First of all, really happy to be here. you know, Meta has done a lot of work and, has been at Siggraph for, 3:44 you know, eight years.
So, I mean, it's a, you know, we're noobs compared to you guys. But, I know, I think it was back in in 2018. 3:51 You're dressed right, but this is my hood. I just, you know, it's I mean, well, thank you for welcoming me to your hood. 3:58 I think it was back in 2018. We showed the some of the early hand-tracking work 4:03 for our VR and mixed reality headsets. You know, I think we've talked a bunch about the progress that we're making on 4:10 codec avatars, the photorealistic avatars that we want to be able to drive from consumer headsets, which we're getting closer and closer to, 4:18 so pretty excited about that. And also, a lot of the display systems work that we've done. So, some of the future prototypes and research for getting 4:26 the mixed reality headsets to be able to be really thin with, like with just, 4:32 pretty advanced optical stacks and display systems, the integrated system - I mean that's stuff that we've typically 4:40 shown here first. So, excited to be here. You know, this year not just talking about the metaverse stuff, but also, 4:48 all the AI pieces, which, as you said, I mean, we started FAIR, the AI research center. 4:54 you know, back then it was Facebook. Now, Meta. Before we started Reality Labs. I mean, we've been at this for for a while. 5:01 All the stuff around gen AI, it's an interesting revolution. 5:07 And I think that it's going to end up making, 5:13 I think all of the different products that we do, you know, different in interesting ways. 5:19 I mean, I kind of go through - you can look at the big product lines that we have already, so things like the feed and recommendation systems and Instagram 5:28 and Facebook and we've kind of been on this journey where that's gone from just being about connecting with your friends 5:35 and, the ranking was always important because even when you were just, you know, following friends, you know, if someone did something really important, 5:43 like your cousin had a baby or something, it's like, you want that at the top. You'd be pretty angry at us if we, you know, it was buried somewhere down in your feed. 5:49 So the ranking was important. But now, over the last few years, it's gotten to a point where more of that stuff 5:56 Is just different public content that's out there. The recommendation systems are super important because now instead of 6:02 just a few hundred or thousand potential candidate posts from friends, 6:07 there's millions of pieces of content and that turns into a really interesting recommendation problem. 6:13 And with generative AI, I think we're going to quickly move into the zone 6:18 where not only is is the majority of the content that you see today on Instagram, 6:25 just recommended to you from the kind of stuff that's out there in the world that matches your interests, whether or not you follow the people. 6:30 I think in the future, a lot of this stuff is going to be created with these tools, too. Some of that is going to be creators using the tools to create new content. 6:38 Some of it, I think, eventually is going to be content that's either created on the fly for you, 6:43 or kind of pulled together and synthesized through different things that are out there. So that that's just one example of how 6:50 kind of the core part of what we're doing is just going to evolve. And it's been evolving for for 20 years already. Well very few people realize that 6:57 one of the largest computing systems the world has ever conceived of is a recommender system. 
7:03 I mean, it's this whole yeah, it's this whole different path. Right? It's not quite the kind of gen AI hotness that people talk about, 7:09 but I think it's all the transformer architectures. And it's a similar thing of just building up more and more general models 7:16 Embedding, embedding unstructured data into features. Yeah. I mean, one of the big things that just drives quality improvements 7:23 is, you know, it used to be that you'd have a different model for each type of content, right? 7:29 So a recent example is, you know, we had, you know, one model for ranking and recommending reels 7:34 and another model for ranking and recommending more long form videos. And then, you know, take some product work to basically make it 7:40 so that the system can display, you know, anything in line. But, you know, the more you kind of just create more general recommendation models that can span everything, 7:48 it just gets better and better. I mean, part of it, I think, is just like economics and liquidity of content 7:54 and the broader of a pool that you can pull from, you're just not having these weird inefficiencies of pulling from different pools. 8:00 But yeah, I mean, as the models get bigger and more general, that gets better and better. So I kind of dream of one day, 8:07 you can almost imagine all of Facebook or Instagram being, you know, like a single AI model that is unified, 8:14 all these different content types and systems together that actually have different objectives over different time frames. Right. 8:19 Because some of it is just showing you, you know, what's the interesting content that you're going to be that, that you want to see today, 8:24 but some of it is helping you build out your network over the long term. Right? People you may know or accounts you might want to follow. 8:30 And these these multi-modal models tend to be, tend to be much better at recognizing patterns, weak signals and such. 8:38 And so one of the things that people people, you know, it's so interesting that AI has been so deep in your company, you've been building GPU infrastructure, 8:48 running these large recommender systems for a long time. We’re a little slow on it actually, getting to GPUs. 8:56 Yeah, I was trying to be nice. I know. Well, you know too nice. I was trying to be nice. 9:01 You know, you’re my guest. When I was backstage before I came on here, you were talking about, like, owning your mistakes or something, right? So 9:09 You don't have to volunteer it out of the blue. I think this one has been well tried. 9:14 Yeah, it's like I got raked over the coals for it. As soon as you got into it, you got into it strong. 9:20 Let's just put there you go, there you go. Now, the thing that's really cool about, about generative AI is these days when I use WhatsApp, 9:29 I feel like I'm collaborating with WhatsApp. I love Imagine. I'm sitting here typing and it's generating the images 9:34 as I'm going. I go back and I change my words. It's generating other images. Yeah. You know, and so the one that 9:43 old Chinese guy, enjoying a glass of whiskey at sundown 9:51 with three dogs: a Golden Retriever, a Goldendoodle and a Bernese Mountain dog. 9:56 And it generates, you know, a pretty good-looking picture. Yeah. Yeah, we're getting there. 10:02 And then now you could actually load my picture in there and it’ll actually be me. Yeah. That's as of last week. Yeah. Yeah. Super excited about that. 10:08 Now imagine me. Yeah. 
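Mark's point about unifying recommendations into a single embedding-based model is easy to make concrete. Below is a minimal two-tower sketch in PyTorch (fitting, since PyTorch comes up later in the conversation): users and items are embedded into one shared vector space and scored by a dot product, so any content type mapped into that space can be ranked by the same model. This is an illustration of the general technique, not Meta's system; every name and dimension here is made up.

```python
# Minimal two-tower recommender sketch (illustrative only).
import torch
import torch.nn as nn

class TwoTowerRecommender(nn.Module):
    def __init__(self, num_users: int, num_items: int, dim: int = 64):
        super().__init__()
        # One embedding table per side; real systems also encode raw
        # ("unstructured") features such as text or video into these vectors.
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)

    def forward(self, user_ids: torch.Tensor, item_ids: torch.Tensor) -> torch.Tensor:
        # Relevance is the dot product of the two embeddings, so one general
        # model can score reels, long-form video, or posts placed in the
        # same space, instead of a separate model per content type.
        u = self.user_emb(user_ids)   # (batch, dim)
        v = self.item_emb(item_ids)   # (batch, dim)
        return (u * v).sum(dim=-1)    # (batch,)

model = TwoTowerRecommender(num_users=1000, num_items=5000)
scores = model(torch.tensor([0, 1]), torch.tensor([42, 7]))
print(scores.shape)  # torch.Size([2])
```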
Now I'm spending a lot of time with my daughters imagining them as mermaids and things over the last, over the last week, it's 10:15 been it's been a lot of fun. But yeah, I mean, that's that's the other half of it. I mean, a lot of the gen AI stuff is going to, on the one hand 10:22 it’s I think going to just be this big upgrade for all of the workflows and products that we've had for a long time. But on the other hand, there's going to be all these 10:28 completely new things that can now get created. So Meta AI you know, the idea of having, you know, just an AI assistant 10:35 that can help you with different tasks and, in our world is going to be, you know, very creatively oriented, like you're saying. But um. 10:44 I mean, they're very general, so you don't need to just constrain it to that. It'll be able to answer any question. Over time, I think, you know, when we move from 10:51 like the Llama 3 class of models to Llama 4 and beyond, it's, 10:56 it's going to, I think, feel less like a chat bot where it's like you, you give it a prompt and it just responds. 11:04 Then you give it a prompt and it responds, and it's just like back and forth. I think it's going to pretty quickly evolve to, you give it an intent 11:12 and it actually can go away on multiple time frames. And I mean, it probably should acknowledge that you gave it an intent up front. 11:18 But I mean, some of the stuff I think will end up, you know it’ll spin up, you know, compute jobs that take, you know, weeks or months or something 11:25 and then just come back to you and like, something happens in the world. And I think that that's going to be really powerful. 11:31 Today's AI, as you know, is kind of turn-based, you say something, it says something back to you. 11:37 But obviously when we think, when we're given a mission or we're giving a problem, you know, we'll contemplate multiple options. 11:45 Or maybe we come up with a, you know, a tree of options, a decision tree, and we walk down the decision tree simulating in our mind, 11:52 you know, what are the different outcomes of each decision that we could potentially make. And so we're doing planning. 11:58 And so in the future AI's will kind of do the same. One of the things that that I was super excited about 12:04 when you talked about your vision of creator AI, I just think that's a home run idea, frankly. 12:11 Tell everybody about the creator AI and AI studio that's going to enable you to do that. Yeah, so we actually I mean, this is something that we're 12:17 we've talked about it a bit, but we're rolling it out a lot wider today. 12:23 You know, a lot of our vision is that I don't think that there's just going be like one AI model, right? I mean, this is something that some of the other companies 12:30 in the industry, they're like, you know, it's like they're building like one central agent and yeah, we'll have the Meta AI assistant that you can use. 12:37 But a lot of our vision is that we want to empower all the people who use our products to basically create agents for themselves. 12:45 So whether that's, you know, all the many, many millions of creators that are on the platform or, you know, hundreds of millions of small businesses, 12:52 we eventually want to just be able to pull in all your content and very quickly stand up a business agent and be able to interact 12:59 with your customers and, you know, do sales and customer support and all that. So, the one that we're that we're just starting to roll out more now is, 13:07 we call it AI Studio. 
And it basically is, a set of tools that eventually is going to make it 13:12 so that every creator can build sort of an AI version of themselves, as sort of an agent or an assistant that their community can interact with. 13:22 There's kind of a fundamental issue here where there's just not enough hours in the day. Right? 13:27 It’s like if you're a creator, you want to engage more with your community, but you're constrained on time. 13:34 And similarly, your community wants to engage with you, but it's tough. I mean, there's just limited time to do that. 13:40 So the next best thing is allowing people to basically create these artifacts. 13:46 Right? It's an agent, but it's you train it on your material 13:54 to represent you in the way that you want. I think it's a very kind of creative endeavor, almost like a, like a piece of art or content that you're putting out there. 14:02 No, it's to be very clear that it's not engaging with the creator themselves. But I think it'll be another interesting way, just like how creators put out content 14:09 on, on these, social systems, to be able to have agents that do that. Similarly, I think that there's going to be a thing 14:17 where people basically create their own agents for all different kinds of uses. Some will be sort of customized utility, things that they're trying to get done 14:26 that they want to fine tune and and train an agent for, and some of them will be entertainment. And some of the things that people create are just funny, 14:32 you know, and just kind of silly in different ways. Or kind of have a funny attitude about things that, 14:38 you know, we probably couldn't we probably wouldn't build into Meta AI as an assistant, but I think people 14:45 people are kind of pretty interested to see, and interact with. And then one of the interesting use cases that we're seeing is people 14:52 kind of using these agents for support. This was one thing that was a little bit 14:58 surprising to me is one of the top use cases for Meta AI already is 15:03 people basically using it to role play difficult social situations that they're going to be in. So whether it's a professional situation, it's like, all right, 15:10 I want to ask my manager, like, how do I get a promotion or raise? Or I'm having this fight with my friend, or I'm having this difficult situation 15:18 with my girlfriend. Like, how can this conversation go? And basically having a like a completely judgment-free 15:26 zone where you can basically role play that and see how the conversation will go and get feedback on it. 15:34 But a lot of people, they don't just want to interact with the same agent, whether it's Meta 15:39 AI or ChatGPT or whatever it is that everyone else is using, they want to kind of create their own thing. So that's roughly where we're going with AI studio. 15:47 But it's all part of this bigger, I guess, view that we have, that there shouldn't just be one big AI that people interact with. 15:56 We just think that the world will be better and more interesting if there's a diversity of these different things. I just think it's so cool that if you're an artist 16:03 and you have a style, you could take your style, all of your body of work, you could fine tune 16:08 one of your models. And now this becomes an AI model that you can come and you could prompt it. 16:14 You could ask me to create something along the lines of the art style that I have, 16:20 and you might even give me a piece of art as a drawing, a sketch, as an inspiration. 
And I can generate something for you. 16:27 And you come to my bot for that, come to my AI for that. 16:33 It could be, every single restaurant, every single website will probably in the future have these AIs. 16:42 Yeah I mean, I kind of think that in the future, just like every business has, 16:47 you know, an email address and a website and a social media account or several. I think in the future, every business is going to have an AI 16:55 agent that interfaces with their customers. And some of these things, I think have been pretty hard to do historically. Like, if you think about any company, it's like you probably have customer 17:03 support as just a separate organization from sales, and that's not really how you'd want it to work as CEO. 17:10 It's just that, okay, they're kind of different skills. You're building up these- I'm your customer support just so you know. 17:15 Yeah. Well, apparently I am. Whenever Mark needs something. 17:21 I can't tell whether it’s his chat bot or it's just Mark, but… It just was my chat bot here, just asking here. 17:28 Well, I guess that's kind of, when you're CEO, you have to do all this stuff. But, I mean, then when you build the abstraction in your organization, a lot of times, like the, 17:35 you know, in general the organizations are separate because they're kind of optimized for different things. But I think, 17:40 like the platonic ideal of this would be that it's kind of one thing, right? As a, you know, as a customer, you don't really care. 17:48 You know, you don't want to have a different route when you're trying to buy something versus if you're having an issue with something that you bought, you just want to have a place that you can go 17:55 and get your questions answered and be able to engage with the business in different ways. And I think that that applies for creators, too. 18:02 I think that’s the kind of personal consumer side of this- And all that engagement with your customers, especially their complaints, is going to make your company better. 18:09 Yeah. Totally. Right? The fact that it's all engaging with this AI is going to capture 18:14 the institutional knowledge and all of that can go into analytics which improves the AI and so on, so forth. 18:21 Yeah, yeah. So the business version of this is- that I think has a little more integration and we're still in a pretty early alpha with that. 18:29 But the AI Studio making it so that people can kind of create their UGC agents and different things, 18:34 and getting started on this flywheel of having creators create them. I'm pretty excited about that. So can I, can I use AI Studio to fine tune with my images, my collection of images? 18:44 Yeah, yeah, we're going to get there. And then I could, could I give it, load it with all the things that I've written, 18:50 use it as my RAG? Yeah. Basically. Okay. And then every time I come back to it, it loads up its memory again, 18:57 so it remembers where it left off last time. And we carry on our conversation as though nothing ever happened. 19:03 Yeah and look, I mean, like any product, it'll get better over time. The tools for training, it will get better. 19:09 It's not just about what you want it to say. I mean, I think generally creators and businesses have topics that they want to stay away from too. 19:15 So just getting better at all this stuff, I think the platonic version of this is not just text, right?
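Jensen's question about loading an AI with everything he has written and using it "as my RAG" describes the standard retrieval-augmented generation loop: embed the documents once, retrieve the passages most relevant to a question, and prepend them to the model prompt. Here is a minimal, self-contained sketch of that pattern; the toy hash-based embedding stands in for a real embedding model, the sample documents are invented, and nothing here reflects AI Studio's actual implementation.

```python
# Minimal retrieval-augmented generation sketch (illustrative only).
import hashlib
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy embedding: hash each word into a fixed-size vector.
    A real system would use a trained embedding model instead."""
    v = np.zeros(dim)
    for word in text.lower().split():
        idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        v[idx] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

# "Load it with all the things that I've written": embed every document once.
documents = [
    "Notes on building accelerated computing platforms.",
    "Thoughts on why every company will train domain-specific models.",
    "A memo about energy-efficient data center design.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(question: str, k: int = 2) -> list[str]:
    # Cosine similarity against the index (vectors are unit-normalized),
    # then return the top-k passages.
    scores = doc_vectors @ embed(question)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

question = "How should companies think about their own models?"
context = "\n".join(retrieve(question))
prompt = f"Context:\n{context}\n\nQuestion: {question}"
print(prompt)  # This prompt would then be sent to the fine-tuned model.
```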
19:21 You almost want to just be able to, and this is a sort of an intersection with some of the codec avatar work that we're doing over time. You want to basically be able 19:27 to have almost like a, a video chat with the agent. 19:33 And I think we'll get there over time. I don't think that this stuff is that far off, but the flywheel is spinning really quickly, so it's exciting. 19:42 There is a lot of new stuff to build. And I think even if the progress on the foundation 19:47 models kind of stopped now, which I don't think it will, I think we'd have like five years of product innovation 19:54 for the industry to basically figure out how to most effectively use all the stuff that's gotten built so far. 20:01 But I actually just think the kind of foundation models and the progress on the fundamental research is accelerating. 20:06 So, that it's, a pretty wild time. Your vision- 20:12 It's all you know, you kind of made this happen. Why thank you. 20:20 In the last conversation, I - Thank you. 20:26 Yeah. You know, you know, you know, CEOs, we're delicate flowers. 20:31 We need a lot of back- Yeah. We're pretty grizzled at this point. 20:37 I think we're we're the two kind of longest standing founders in the industry, right? It's true. 20:44 It's true. I just- And your hair has gotten gray. 20:50 Mine has just gotten longer. Mine's gotten gray. Yours has gone curly, what's up? 20:56 It was always curly. That's why I kept it short. Okay. You know, I just. 21:03 If I'd known it was going to take so long to succeed, you never would have started. No, I would have dropped out of college, just like you. 21:11 Get a head start. Well, that's a there's a good difference between our personalities. You got a 12 year head start. 21:18 That's pretty good. You know, you're doing pretty well. I'm gonna- 21:24 I'm going to be able to carry on. Let me just put it that way. Yeah. So, so, 21:30 the thing that I love about your vision that, 21:35 everybody can have an AI that every business can have an AI 21:43 In our company, I want every engineer and every software developer to have an AI. 21:49 And, or many AIs. The thing that I love about your vision 21:56 is you also believe that everybody and every company should be able to make their own AI. 22:03 So you actually open-sourced, when you open-sourced Llama I thought that was great. Llama 2, by the way, 22:09 I thought Llama 2 was probably the biggest event 22:15 in AI last year. And the reason for that- I mean, I thought it was the H100, but, 22:20 you know, it's, it's a chicken or the egg question. 22:27 That's a chicken or the egg question. Yeah. Which came first? The H100. 22:32 Well, Llama 2, it was, it was actually not the H100. Yeah, it was A100 yeah. Thank you. And so, 22:39 but the reason why I said it was the biggest event was because when that came out, 22:45 it activated every company, every enterprise and every industry. 22:51 All of a sudden, every health care company was building AI. Every company was building AI, every large company, small companies, startups were building AIs. 22:59 It made it possible for every researcher to be able to reengage AI again, because they have a starting point to do something with, 23:06 and then now, 3.1 is out and the excitement, just so you know, 23:12 you know, we work together to, to deploy, 3.1, we're taking it out to the world's enterprise. 23:19 And the excitement is just off the charts. And, and I think it's going to enable all kinds of applications.
23:24 But tell me about your your open-source philosophy. Where did that come from? And, you know, you open-sourced PyTorch. 23:31 And that it is now the framework by which AI is done. And, now you've open-sourced Llama 23:37 3.1 or Llama there's a whole ecosystem built around it. And so I think it's terrific. But where did that all come from? 23:44 Yeah. So there's a bunch of history on a lot of this. I mean, we've done a lot of open-source work over time. 23:53 I think part of it, you know, just bluntly is, you know, we got started 23:58 after some of the other tech companies, right, in building out stuff like the distributed computing infrastructure and the data centers. 24:05 And, you know, because of that, by the time that we built that stuff, it wasn't a competitive advantage. We're like, all right, we might as well make this open 24:12 and then we'll benefit from the ecosystem around that. So we had a bunch of projects like that. 24:18 I think the biggest one was probably Open Compute where we took our server designs, the network designs, 24:24 and eventually the data center designs and published all of that. And by having that become somewhat of an industry standard, 24:32 all the supply chains basically got organized around it, which had this benefit of saving money for everyone. 24:37 So by making it public, and open, we basically have saved billions of dollars from doing that. Well, Open Compute was also what made it possible for NVIDIA 24:45 HGXs, that we designed for one data center, all of a sudden, works in every data center. Awesome. 24:51 So that was an awesome experience. And then, you know, we've done it with a bunch of our 24:56 infrastructure tools, things like React, PyTorch. So I'd say by the time that Llama came around, 25:02 we were sort of positively predisposed towards doing this. 25:07 For, for AI models specifically. I guess there's a few ways that I look at this. 25:12 I mean, one is, you know it's been really fun building stuff over the last 20 years at the company. 25:19 One of the things that that has been sort of the most difficult has been kind of having to navigate the fact 25:26 that we ship our apps through our competitor’s mobile platforms. So in the one hand, the mobile platforms have been this huge boon to the industry. 25:34 That's been awesome. On the other hand, having to deliver your products through your competitors, 25:40 is challenging, right? And I also, you know, I grew up in a time where, you know, the first version of Facebook was on the web and that was open. 25:46 And then, as a transition to mobile, you know, the plus side of that was, you know, now everyone has a computer in their pocket. 25:51 So that's great. The downside is, okay, we're a lot more restricted in what we can do. So, when you look at these generations of computing there's this big recency bias 26:01 where everyone just looks at mobile and thinks, okay, because the closed ecosystem, because Apple basically won and set the 26:09 the terms of that. And like yeah, I know that there's more Android phones out there technically, but like Apple basically has the whole market. 26:15 and like all the profits. And basically Android is kind of following Apple in terms of the development of it. 26:21 So I think Apple pretty clearly won this generation. But it's not always like that where if you go back a generation, 26:28 you know, Apple was doing their kind of closed thing. 
But Microsoft, which as you know, it obviously wasn't like this perfectly open 26:36 company, but, you know, compared to Apple with Windows running on all the different OEMs and different software, different hardware 26:45 it was a much more open ecosystem and Windows was the leading ecosystem. It, basically in the kind of PC 26:53 generation of things, the open ecosystem won. And I am kind of hopeful 27:00 that in the next generation of computing, we're going to return to a zone where the open ecosystem wins and is the leading one again. 27:07 There will always be a closed one and an open one. I think that there's reasons to do both. There are benefits to both. 27:12 I'm not like a zealot on this. I mean, we do closed source stuff and not everything that we that we publish is open. 27:19 But I think in general for the computing platforms that the whole industry is building on, there's a lot of value for that if the software especially is open. 27:27 So that's really shaped my philosophy on this. And, for both AI with Llama 27:34 and with the work that we're doing in AR and VR, where we are basically making the Horizon OS that we're building for mixed reality, 27:41 an open operating system in the sense of, what Android or Windows was and basically making it so that 27:49 we're going to be able to work with lots of different hardware companies to make all different kinds of devices. 27:54 We basically just want to return the ecosystem to that level where that's going to be the open one. And I'm pretty optimistic 28:01 that in the next generation, the open ones are going to win. For us specifically 28:07 I just want to make sure that we have access to- I mean, this is sort of selfish, but, you know, after building this company for a while, 28:16 one of my things for the next 10 or 15 years is like, I just want to make sure that we can build the fundamental technology 28:21 that we're going to be building social experiences on, because there have just been too many things that I've tried to build and then have just been told, 28:28 nah, you can't really build that by the platform provider, that 28:34 like, we're going to go build all the way down and, and make sure that that- There goes our broadcast opportunity. 28:40 Yeah. No, sorry. Sorry. There's a beep. Yeah. 28:48 You know, I’ve been doing okay for, like, 20 minutes, but... get me talking about closed platforms and I get angry. 28:58 Hey, look, it is great. I think it's a great world. Where there are people who are dedicated 29:06 to build the best possible AIs, however they build it, and they offer it to the world, 29:13 you know, as a service. And then. But if you want to build your own AI, you could still also build your own AI. 29:19 So the ability to use an AI. You know, there's a lot of stuff, I prefer not to make this jacket myself. 29:26 I prefer to have this jacket made for me. You know what I'm saying? Yeah. But so the fact that. 29:31 So the fact that leather could be open source is not a useful concept for me, but I think the, the idea that you could, 29:38 you could have great services, incredible services as well as open service. Open ability. 29:43 Then we basically have the entire spectrum. But the thing that's, that you did with 3.1 that was really 29:52 great was you have 405B, you have 70B, you have 8B. You could, you could use it for synthetic data generation, 30:00 use the larger models to essentially teach the smaller models.
And although the larger models will be more general, it's less brittle, 30:08 you could still build a smaller model that fits in, you know, whatever operating domain or operating costs that you would like to have. 30:16 Meta guard, I think? Yeah Llama Guard. Yeah Llama Guard, Llama Guard for guard railing. Fantastic. 30:22 And so now and the way that you built the model, it's built in a transparent way. 30:29 You dedicated- You've got a world class safety team. World class ethics team. You could build it in such a way that everybody knows it's built properly. 30:36 And so I really love that part of it. Yeah and I mean, just to finish the thought from before, 30:42 before I got, I got sidetracked there for a detour. I do think there's this alignment where 30:47 we're building it because we want the thing to exist, and we want to not get cut off from some closed model. 30:53 Right? And, but this isn't just like a piece of software that you can build. 30:59 It's, you know, you need an ecosystem around it. And so it's almost like it kind of almost wouldn't even work that 31:05 well if we didn't open source it. Right? It's not we're not doing this because we're kind of altruistic people. 31:12 Even though I think that this is going to be helpful for the ecosystem, and we're doing it because we think that this is going to make the thing that we're building the best by having a robust ecosystem. 31:20 Well, look how many people contributed to PyTorch ecosystem. Yeah, totally. Mountains of engineering. Yeah. Right. Yeah. Yeah. 31:27 I mean, NVIDIA alone, we probably have a couple of hundred people just dedicated to making PyTorch better and scalable and, you know, more performant and so on 31:34 and so forth. Yeah and it's also just when something becomes something of an industry 31:39 standard, other folks do work around it, right? So like all of the silicon in the systems will end up being optimized 31:46 to run this thing really well, which will benefit everyone, but it will also work well with the system that we're building. 31:52 And that's, I think, just one example of how this ends up being, just being really effective. 31:58 So, yeah, I mean, I think that the open-source strategy is going to be, yeah, it's just going to be a good one as a business strategy. 32:05 I think people still don't quite get it. We love it so much. We built an ecosystem around it. We build this thing Called AI Foundry. 32:11 Yeah. Yeah, yeah. I mean, you guys have been awesome. Yeah. I mean, every time we're shipping something, you you guys are the first to release this and optimize it and make it work. 32:18 And so I mean, I, I appreciate that. What can I say? We have good engineers you know and so. 32:25 Well you always just jump on this stuff quickly too. You know, I'm a senior citizen, but I'm agile. 32:34 You know, that's what CEOs have to do. And I recognize an important thing, I recognize an important thing. 32:40 And I think that Llama is genuinely important. We built this concept called an AI factory, uh, AI Foundry around it 32:46 so that we can help everybody build, take- you know, a lot of people, they have a desire to 32:53 build AI. And it's very important for them to own the AI because once they put that into their flywheel, their data flywheel, 33:00 that's how their company's institutional knowledge is encoded and embedded into an AI. 33:06 So they can't afford to have the AI flywheel, the data flywheel that experience flywheel somewhere else. 33:11 So and so open source allows them to do that. 
But they don't really know how to turn this whole thing into an AI 33:16 and so we created this thing called AI Foundry. We provide the tooling, we provide the expertise, 33:22 Llama technology, we have the ability to help them turn this whole thing, into an AI service. 33:29 And, and then when we're done with that, they take it, they own it. The output of it is what we call a NIM. 33:36 And this NIM, this NVIDIA Inference Microservice, they just download it, 33:41 they take it, they run it anywhere they like, including on-prem. And we have a whole ecosystem of partners, from OEMs that can run the NIMs to, 33:49 GSIs like Accenture that we train and work with to create Llama-based NIMs and pipelines and and now 33:58 we're off helping enterprises all over the world do this. I mean, it's really quite an exciting thing. It's really all triggered off of, the Llama open-sourcing. 34:08 Yeah, I think especially the ability to help people distil their own models from the big model is going to be a really valuable new thing 34:15 because there's this, just like we talked about on the product side, how at least I don't 34:20 think that there's going to be like one major AI agent that everyone talks to. At the same level, 34:26 I don't think that there's going to necessarily be one model that everyone uses. We have a chip AI, chip design AI, we have a software coding AI, 34:33 and our software coding AI understands USD because we code in USD for Omniverse stuff. 34:40 We have software AI that understands Verilog, our Verilog. we have we have software AI that understands our bugs database 34:48 and knows how to help us triage bugs and sends it to the right engineers. And so each one of these AIs are fine tuned off of Llama and, 34:56 and so we fine tune them, we guardrail them, you know if we have an AI design, for, 35:04 chip design, we're not interested in asking it about politics, you know, and religion and things like that. 35:10 So we guardrail it. And so, so I think, I think every company will essentially have for every single function 35:15 that they have, they will likely have AIs that are built for that. And they need help to do that. 35:22 Yeah. I mean, I think one of the big questions is going to be in the future, to what extent are people just using the kind of the bigger, more sophisticated models 35:29 versus just training their own models for the uses that they have? And at least I would bet that they're going to be 35:36 just a vast proliferation of different models. We use the largest ones. 35:42 And the reason for that is because our engineers, their time is so valuable. And so we get, right now we're getting 405B, optimized for performance. 35:51 And as you know, 405B doesn't fit in any GPU, no matter how big. And so that's why the NVLink performance is so important. 35:57 We have every one of our GPUs connected by this, non-blocking switch called NVLink switch. 36:03 And in the HGX for example, there are two of those switches and we make it possible for all these, all these GPUs to work and, 36:10 and run the 405Bs really performant. The reason why we do it is because the engineers’ times are so valuable to us. 36:18 You know, we want to use the best possible model, the fact that it's cost effective by a few pennies, 36:23 who cares? And so we just want to make sure that the best quality of results is presented to them. 36:29 Yeah. Well, I mean, the 405B I think is about half the cost to inference of the GPT-4o model.
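For readers wondering what "they just download it, they take it, they run it anywhere they like" looks like in practice: NIM containers expose an OpenAI-compatible HTTP API, so a locally deployed Llama NIM can be called with the standard openai Python client. The endpoint URL, port, and model id below are assumptions for illustration; check the documentation for the specific NIM you deploy.

```python
# Minimal sketch of calling a locally running Llama NIM (illustrative only).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used-locally",           # local NIMs typically ignore the key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # assumed model id for this NIM
    messages=[
        # A guardrailed, function-specific assistant of the kind Jensen
        # describes (chip design, bug triage, etc.).
        {"role": "system", "content": "You are a chip-design assistant."},
        {"role": "user", "content": "Summarize the open bugs assigned to me."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```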
36:34 So I mean, at that level, it's already I mean, it's pretty good. But yeah, I mean I think people are doing stuff on devices or want smaller models. 36:40 They're just going to distil it down. So that's like a whole different set of services. That AI is running, and let's pretend for a second that we're hiring that AI, 36:48 that AI for chip design is probably $10 an hour. You're using, 36:58 if you're using it constantly and you're sharing that AI across a whole bunch of engineers. So each engineer probably has an AI that's sitting with them. 37:06 And, you know, it doesn't cost very much. And we pay the engineers a lot of money. And so to us, a few dollars an hour, 37:14 amplifies the capabilities of somebody that's really valuable. Yeah, yeah. 37:21 I mean, you don't need to convince me. 37:27 If you haven't, if you haven't hired an AI, do it right away. That's all we're saying. 37:32 And so, let's talk about, 37:39 the next, the next wave. you know, one of the things that I really love about the work 37:44 that you guys do, computer vision, one of the models that we use a lot internally, 37:51 is Segment Everything, and, you know, that that we're now training AI models on video 37:59 so that we can understand the world model. Our use case is for robotics and industrial digitalization and, 38:10 connecting these AI models into Omniverse so that we can, we can, model and represent the physical world better, 38:18 have robots that operate in these Omniverse worlds better. Your application, 38:25 the Ray-Ban Meta glass, your vision for bringing AI into the virtual world, is really interesting. 38:33 Tell us about that. Yeah. Well, okay, a lot to unpack in there. the Segment Anything model that you're talking about, we're actually presenting, 38:41 I think the next version of that here at Siggraph. Segment Anything 2. 38:47 And it now works, it's faster, it works with, oh, here we go. 38:53 It works in video now as well. I think these are actually cattle 38:59 from my ranch in Kauai. By the way, these are called Mark’s Cows 39:05 Delicious Mark’s Cows. There you go. Next time we do- 39:12 So, Mark, Mark came over to my house and we made Philly cheesesteak together. Next time you're bringing the cow. 39:18 I’d say you did. I was more of a sous-chef. But, boy, that was really good. 39:24 It was really good. That sous-chef comment. Okay, listen, And then at the end of the night though, you were like, hey, so you ate enough, right? 39:31 And I was like, I don't know, I could eat another one. You're like, really? You know, usually when you say something to your guest. 39:38 I was definitely like, yeah, we're making more, we're making more. Did you get enough to eat? Usually your guest says, oh yeah, I'm fine. 39:46 Make me another cheesesteak Jensen. So just to let you know how OCD he is. 39:52 So I turn around, I'm prepping the, the cheesesteak and I said, Mark, cut the tomatoes. And so Mark, 40:00 I handed him a knife. Yeah, I'm a precision cutter. And so he cuts. He cuts the tomatoes. 40:05 Every single one of them are perfectly to the exact millimeter. But the really interesting thing is, I was expecting all the tomatoes 40:12 to be sliced and kind of stacked up, kind of like a deck of cards. 40:18 And, but when I turned around, he said he needed another plate. And the reason for that was because all of the tomatoes he cut, none of them touched each other. 40:29 Once he separates one slice of tomato from the other tomato, they shall not touch again. Yeah. 
Look, man, if you wanted them to touch, you needed to tell me that. 40:37 That’s why I’m just a sous-chef. Okay? That's why he needs an AI that doesn't judge. 40:44 Yeah, it's like. So this is super cool. Okay, so it's recognizing the cows track. 40:49 It's recognizing tracking the cows. Yeah, yeah. So it's, a lot of fun effects will be able to be made with this. 40:58 And because it'll be open, a lot of more serious applications across the industry, too. 41:04 So, yeah, I mean, scientists use this stuff to, you know, study, like coral reefs and natural habitats and, 41:12 and kind of evolution of landscapes and things like that. But, I mean, it's, being able to do this in video and having it be zero-shot 41:19 and be able to kind of interact with it and tell it what you want to track is, it's pretty cool research. 41:25 So, for example, the reason why we use it, for example, you have a warehouse and they've got a whole bunch of cameras and the warehouse AI 41:34 is watching everything that's going on. And let's say you know, a stack of boxes fell, 41:40 or somebody spilled water on the ground, or, you know, whatever accident is about to happen, the AI recognizes it, generates the text, 41:48 sends it to somebody, and, you know, you know, help will come along the way. And so that's one way of using it, instead of recording everything. 41:56 If there's an accident, instead of recording every nanosecond of video and then going back to retrieve that moment, 42:03 it just records the important stuff because it knows what it's looking at. And so having a video understanding model, a video language model 42:13 is really, really powerful for all of these interesting applications. Now what else what else are you guys going to work on beyond- 42:22 talk to me about- Yes. There's all the smart glasses. Yeah. Right. So I think when we think about the next computing platform, 42:29 you know, we kind of break it down into mixed reality, the headsets and the smart glasses. 42:36 I think it's easier for people to wrap their head around that and wearing it because, you know, pretty much everyone who's wearing a pair of glasses 42:41 today will end up that'll get upgraded to smart glasses. And that's like more than a billion people in the world. So that's going to be a pretty big thing. 42:48 the VR MR headsets, I think some people find it interesting for gaming or different uses. 42:53 Some don't yet. Yet my view is that they're going to be both in the world. I think the smart glasses are going to be sort of the mobile phone, 43:01 kind of always on version of the next computing platform, 43:06 and the mixed reality headsets are going to be more like your workstation or your game console, where, when you're sitting down 43:14 for a more immersive session and you want access to more compute, I mean, look, I mean, the glasses are just very small form factor. 43:22 There are going to be a lot of constraints on that. Just like you can't do the same level of computing on a phone. It came at exactly the time 43:29 when all of these breakthroughs in generative AI happened. Yeah. So we basically for smart glasses, we've been we've been going at the problem 43:36 from two different directions on the one hand, we've been building what we think is sort of the technology that you need for the kind of ideal 43:45 holographic AR glasses and we're doing all the custom silicon work, all the custom display stack work, 43:52 like all the stuff that you need to do to make that work in their glasses. Right?
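The demo Mark walks through above (click on an object, then track it through a video, zero-shot) maps onto the video-predictor interface published in the facebookresearch/sam2 repository. The sketch below follows that published pattern, but the config, checkpoint, and video paths and the click coordinates are placeholders, and the exact call signatures should be verified against the repo's README.

```python
# Minimal sketch of SAM 2 video tracking (paths and prompts are placeholders).
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

predictor = build_sam2_video_predictor(
    "configs/sam2.1/sam2.1_hiera_l.yaml",  # placeholder config path
    "checkpoints/sam2.1_hiera_large.pt",   # placeholder checkpoint path
)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state("warehouse_clip.mp4")  # placeholder video

    # One positive click on the object in frame 0 ("tell it what you want
    # to track"); SAM 2 segments it zero-shot, no task-specific training.
    predictor.add_new_points_or_box(
        state, frame_idx=0, obj_id=1,
        points=np.array([[320, 240]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),  # 1 = positive click
    )

    # Propagate the mask through the rest of the video, as in the cow demo.
    for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
        pass  # e.g., flag frames where a tracked stack of boxes moves
```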
It's not a headset. 43:57 It's not like a VR or MR headset. They look like glasses. But, they're still quite a bit far off from the glasses that you're wearing now. 44:06 I mean, those are very thin, but, but even the Ray-Bans that we that we make, you couldn't quite fit all the tech that you need to 44:12 into that yet for kind of full holographic AR, we're getting close. And over the next few years I think we'll basically get closer. 44:19 It'll still be pretty expensive, but I think that will start to be a product. 44:25 The other angle that we've come at this is let's start with good looking glasses. By partnering with the best glasses maker in the world, Essilor Luxottica. 44:35 They basically make they have all the big brands that you use. You know, it's 44:40 Ray-Ban or Oakley or Oliver Peoples or just like a handful of others. Yeah, it's kind of all Essilor Luxottica. 44:46 The NVIDIA of glasses. I think that, you know, I think they would probably like that 44:52 analogy, but, I mean, who wouldn't at this point? 44:58 So we've been working with them on the Ray-Bans. We're on the second generation. And the goal there has been, okay, let's constrain the form factor 45:04 to just something that looks great. And within that, let's put in as much technology as we can, 45:10 understanding that we're not going to get to the kind of ideal of what we want to fit into a technically, but 45:15 it'll, but at the end, it'll be like great looking glasses. And at this point we have we have camera sensors, 45:22 so you can take photos and videos. You can actually livestream to Instagram. You can take video calls on WhatsApp and stream to the other person 45:30 what you're seeing. You can, I mean, it has it has a microphone and speakers. 45:35 I mean, the speaker is actually really, really good. It’s open-ear so really a lot of people find it more comfortable than, than earbuds. 45:42 you can listen to music and it's just like this private experience. That's pretty neat, people love that. You take phone calls on it. 45:50 but then it just turned out that that sensor package was exactly what you needed to be able to talk to AI too. 45:55 So that was sort of an accident. If you'd asked me five years ago, were we going to get holographic 46:01 AR before AI, I would have said, yeah, probably. Right I mean, it's just seems like 46:07 kind of the graphics progression and the display progression on all the virtual and mixed reality stuff and building up the new display stack. 46:13 We're just making continual progress towards that. That's right. And then this breakthrough happened with LLMs. 46:20 And it turned out that we have sort of really high-quality AI now and getting better at a really fast rate before you have holographic AR. 46:28 So it's sort of this inversion that, that I didn't really expect. I mean, we're we're fortunately well positioned 46:34 because we were working on all these different products. But I think what you're going to end up with is, 46:39 just a whole series of different potential glasses products at different price points with different levels of technology in them. 46:46 So I kind of think, based on what we're seeing now with the Ray-Ban Metas, I would guess that display-less AI glasses 46:56 at like a $300 price point are going to be a really big product that, like tens of millions of people 47:01 or hundreds of millions of people eventually are going to have. So you're going to have super interactive AI that you're talking to. 47:09 Yeah, visual.
You have visual language understanding that you just showed you have real time translation. 47:15 You could talk to me in one language, I hear in another language. Then then the display is obviously going to be great too, 47:21 but it's going to add a little bit of weight to the glasses and it's going to make them more expensive. So I think for there will be a lot of people 47:26 who want the kind of full holographic display. But there are also going to be a lot of people for whom, 47:33 you know, they want something that eventually is going to be like really thin glasses and- Well for industrial applications and for some work applications, we need that. 47:41 I think, for consumer stuff too. You think so? Yeah. I mean, I think, you know, it's 47:46 I was thinking about this a lot during the, you know, during Covid when everyone kind of went remote for a bit. 47:52 It's like you're spending all this time on Zoom that's like, okay, this is 47:57 like it's great that we have this, but, but in the future we're like, not that many years away 48:04 from being able to have a virtual meeting where, like, you know, it's like, I'm not here physically. 48:10 It's just my hologram. Yeah. And like, it just feels like we're there and we're physically present. We can work on something and collaborate on something together. 48:17 But I think this is going to be especially important with AI. With that application I could live with, with a, a device that, that I'm not wearing all the time. 48:25 Oh yeah. But I think we're going to get to the point where it actually is. Yeah It’ll be, I mean, within glasses there's like thinner frames and there's thicker frames 48:32 and there's like all these styles. But so I don't, I think we're, we're a while away from having full holographic glasses 48:37 in the form factor of your glasses, but I think having it in a pair of stylish, kind of chunkier framed glasses is not that far off. 48:45 Sunglasses are face size these days. I could see that. Yeah. And you know, that's that's a very helpful style. 48:51 Yeah, sure. that's very helpful. You know, it's like I'm trying to, you know, I'm trying to make my way into becoming a style influencer. 49:00 So I can, like, influence this before, you know, before the glasses come to the market, but, you know? 49:05 Well I can see you attempting it. How's your style influencing working out for you? You know, it's early. Yeah? 49:13 It's early. It's early. But, I don't know, I feel like if a big part 49:18 of the future of the business is going to be building, kind of stylish glasses that people wear, 49:24 this is something I should probably start paying a little more attention to. That’s right. So, yeah, we're going to have to retire the version of me that wore the same thing every day. 49:32 But I mean, that's the thing about glasses, too. I think it's, you know, it's unlike, you know, even the watch 49:38 or phones, like, people really do not want to all look the same. 49:44 Right? And it's like, so I do think that it's, you know, it's a, it's a platform that I think is going to lend itself, 49:51 going back to the theme that we talked about before towards being an open ecosystem, because I think the diversity of form factors that people and styles 49:58 that people are going to demand is going to be immense. It's not like everyone is not going to want to put like the one kind of pair of glasses that, 50:06 you know, whoever else designs like, that's not I don't think that's going to fly for this. Yeah, I think that's right. 
50:11 Well, Mark, it's sort of incredible that we're living through a time where the entire computing stack is being reinvented, how we think about software. 50:23 You know, what Andrej calls Software 1.0 and Software 2.0. And now we're basically in Software 3.0 now. 50:29 The way we compute, from general purpose computing to these generative neural network processing way of doing computing. 50:39 The capabilities, the applications we could develop now are unthinkable in the past. 50:45 And, and this technology, generative AI, I don't remember another technology 50:51 that at such a fast rate, influenced consumers, 50:57 enterprise, industries and science. And to be able to, to cut across, cut across, 51:05 all these different fields of science from, from climate tech to, biotech, 51:12 to physical sciences, in every single field that we're encountering, 51:18 generative AI is right in the middle of that, fundamental transition. 51:24 And in addition to that, the things that you're talking about, generative AI is going to make a profound impact in society. 51:33 You know, the products that we're making. And one of the things that I'm super excited about, and somebody asked me earlier, is there going to be a, you know, Jensen AI? 51:42 Well, that's exactly the creator AI you were talking about. You know, where we just build our own AIs and I, I load it up 51:48 with all of the things that I've written and I fine tune it with 51:53 the way I answer questions and hopefully, over time, the accumulation of use and, 52:00 you know, it becomes a really, really great assistant and companion, For a whole lot of people who just want 52:07 to, you know, ask questions or, bounce ideas off of and, and it'll be the version of Jensen 52:14 that as, as you were saying earlier, that's not judgmental. 52:19 You're not afraid of being judged. And so you could come and interact with it all the time. But I just think, I think that those are really incredible things. 52:27 And, you know, we write we write a lot of things all the time. And how incredible is it just to give it, you know, 3 or 4 topics. 52:34 Now, these are the basic themes of what I want to write about and write in my voice and just use that as a starting point. 52:40 So there's just so many things that we can do now. It's really terrific working with you. And, 52:47 I know that, I know that, it's not easy building a company, and you pivoted yours 52:54 from desktop to mobile to VR to AI, all these devices, it's really, really, really extraordinary to watch. 53:03 And NVIDIA's pivoted many times ourselves, and I know exactly how hard it is doing that. 53:08 And, you know, both of us have gotten kicked in our teeth a lot, plenty over the years. 53:14 But that's what it takes to, to want to be a pioneer and, innovate. So it's really great watching you. 53:21 Well. 53:29 And likewise, I mean, it's like, it's I'm not sure if it's a pivot if you keep doing the thing you were doing before, but as well. 53:36 But it's but you add to it. I mean there's more chapters to all, to all of this. And I think the same thing for, it's been fun watching... 53:44 I mean, the journey that you guys have been on, I mean, just and you, we went through this period where everyone was like, nah, everything is going to kind of move to these devices and, 53:52 you know, it's just going to get super kind of cheap compute. And you guys just kept on plugging away at this and it's like, no, like actually 53:59 you're going to want these big systems that can parallelize.
You went the other way. Yeah. No. 54:06 Instead of building smaller and smaller devices, we made computers the size of warehouses. 54:11 A little unfashionable. Super unfashionable. Yeah, yeah. But now it's cool. And, you know, we started building a graphics chip, a GPU, 54:21 and now, when you're deploying a GPU, you still call it Hopper H100. So you guys know, when 54:29 Zuck calls it H100, his data center of 54:35 H100s, I think you're coming up on 600,000. We're good customers. 54:46 That's how you get the Jensen Q&A at Siggraph. 54:54 Wow. Hang on. I was getting the Mark Zuckerberg Q&A. You were my guest. 55:00 And I wanted to make sure that- You just called me one day. You're like, hey, you know, in like a couple of weeks, we're doing this thing at Siggraph. 55:06 I'm like, yeah, I don't think I'm doing anything that day. I'll fly to Denver. It sounds fun. Exactly. 55:11 I'm not doing anything that afternoon, you just showed up. But the thing that's just incredible: 55:18 these systems that you guys build, they're giant systems, 55:23 incredibly hard to orchestrate, incredibly hard to run. And, you know, you said that you got into the GPU 55:31 journey later than most, but you're operating at larger scale than just about anybody, 55:37 and it's incredible to watch. And congratulations on everything that you've done. And you are quite the style icon now. 55:45 Check out this guy. Early stage, working on it. It's uh- Ladies and gentlemen, Mark Zuckerberg. 55:50 Thank you. 55:56 Hang on, hang on. Well, 56:01 you know, so it turns out the last time that we got together, after dinner, 56:09 Mark and I were- Jersey swap. Jersey swap, and 56:15 we took a picture, and it turned into something viral. And 56:25 now, I thought that he has no trouble wearing my jacket. 56:31 I don't know, is that my look? It should be. 56:38 Is that right? Yeah. I actually made one for you. 56:47 You did? Yeah. That one's Mark's. I mean, here, let's see. 56:52 We got a box back here. It's black and leather and shearling. 56:58 Oh! 57:04 I didn't make this. I just ordered it online. Hang on a second. 57:09 It's a little chilly in here. I think I'll try this on. I think this is- My goodness. 57:15 I mean, it's a vibe you just need. Is this me? 57:23 Get this guy a chain. Next time I see you I'm bringing you a gold chain. So fair is fair. 57:29 So, you know, I was telling everybody that Lori bought me a new jacket to celebrate this year's Siggraph. 57:36 Siggraph is a big thing in our company, as you could imagine. RTX was launched here. 57:42 Amazing things were launched here. And this is a brand-new jacket. It's literally two hours old. Wow. 57:49 And so I think we oughta jersey swap again. All right. Well- This one's yours. I mean, this is worth more because it's used. 58:00 Let's see. I don't know. I think Mark is pretty buff. 58:06 He's like, the guy is pretty jacked. I'm in. You too, man. 58:12 All right, all right, all right, everybody, thank you. Mark Zuckerberg, have a great Siggraph.

Saturday, May 10, 2025

NVIDIA CEO Jensen Huang's Vision for the Future

NVIDIA CEO Jensen Huang's Vision for the Future
Cleo Abram
2,651,667 views Jan 27, 2025
What NVIDIA is trying to build next… Subscribe for more optimistic science and tech stories from our show Huge If True. You're probably hearing a lot about AI, DeepSeek, NVIDIA and more right now. If you want the big picture (and to start from the beginning), watch this Huge Conversation with NVIDIA CEO Jensen Huang. In the last few years, NVIDIA has skyrocketed to become one of the world's most valuable companies. That's because, beginning in the 90s, they led a fundamental shift in how computers work, now unleashing the current explosion of what's possible with technology. A huge amount of the most futuristic tech you're hearing about - in AI, robotics, gaming, self-driving cars, breakthrough medical research - relies on new chips and software designed by him and his company. During the dozens of background interviews I did to prepare for this, what struck me most was how much Jensen Huang has already influenced all our lives over the last 30 years, and how many are saying it's just the beginning of something even bigger… We all need to know what he's building and why, and most importantly, what he's trying to build next, so you can decide for yourself what you think of it. Welcome to the second episode of our new series, Huge Conversations… If you want to know what the most important people building the future are imagining it will look like, Huge Conversations is the show for you. This interview was recorded at CES in Las Vegas on January 7th, 2025. Watch our first episode of Huge Conversations with Mark Zuckerberg here: • The Future Mark Zuckerberg Is Trying ... Watch our trailer to understand more about the mission of Huge Conversations: • Something Big Is Coming...
Chapters:
0:00 What is Jensen Huang trying to build?
1:40 The goal of this Huge Conversation
3:40 How did we get here?
4:25 What is a GPU?
5:45 Why video games first?
7:59 What is CUDA?
11:04 Why was AlexNet such a big deal?
15:40 Why are we hearing about AI so much now?
19:33 What are NVIDIA's core beliefs?
21:34 Why does this moment feel so different?
24:08 What's the future of robots?
30:15 What is Jensen's 10-year vision?
32:00 What are the biggest concerns?
35:14 What are the biggest limitations?
38:05 How does NVIDIA make big bets on specific chips (transformers)?
42:33 How are chips made?
44:19 What's Jensen's next bet?
47:20 How should people prepare for this future?
50:12 How does this affect people's jobs?
52:37 GeForce RTX 50 Series and NVIDIA DGX
55:50 What's Jensen's advice for the future?
59:07 How does Jensen want to be remembered?
You can find me on Instagram here: / cleoabram On TikTok here: / cleoabram Or on Twitter here: / cleoabram
Bio: Cleo Abram is a video journalist who produces Huge If True, an optimistic show about science and technology. Huge If True is an antidote to the doom and gloom, helping a wide audience see better futures they can help build. In each episode, Cleo dives deep into one innovation that could shape the future. She has explored humanoid robots at Boston Dynamics, supersonic planes at NASA, quantum computers at IBM, the Large Hadron Collider at CERN, and more. Every episode mixes high quality animations and detailed scripts with relatable vlog-style journeys, taking the audience along for an adventure to answer the question: If this works, what could go right?
Previously, Cleo was a video producer at Vox and directed for Explained on Netflix. She was the host of Vox's first ever daily show, Answered, as well as co-host of Vox's YouTube Originals show, Glad You Asked. Vox: https://www.vox.com/authors/cleo-abram IMDb: https://www.imdb.com/name/nm10108242/ — Welcome to the joke down low: Why does a GPU without CUDA wear glasses? Because it can't C! If you don't get this joke yet, watch the rest of the episode! Find a way to use "C" in a comment to let me know you're a real one who made it to the end of the description :)
Transcript
What is Jensen Huang trying to build? 0:00 At some point, you have to believe something. We've reinvented computing as we know it. What is the vision for what you see coming next? We asked ourselves, if it can do this, how far can 0:08 it go? How do we get from the robots that we have now to the future world that you see? Cleo, everything that moves will be robotic someday, and it will be soon. We 0:17 invested tens of billions of dollars before it really happened. No, that's very good, you 0:22 did some research! But the big breakthrough, I would say, is when we... 0:28 That's Jensen Huang, and whether you know it or not, his decisions are shaping your future. He's the CEO of 0:36 NVIDIA, the company that skyrocketed over the past few years to become one of the most valuable companies in 0:41 the world, because they led a fundamental shift in how computers work, unleashing this current 0:46 explosion of what's possible with technology. "NVIDIA's done it again!" We found ourselves being one of the most important technology companies in the world and potentially ever. A huge amount of 0:56 the most futuristic tech that you're hearing about in AI and robotics and gaming and self-driving cars and breakthrough medical research relies on new chips and software designed by him and his 1:06 company. During the dozens of background interviews that I did to prepare for this, what struck me most was how much Jensen Huang has already influenced all of our lives over the last 30 years, and how 1:16 many said it's just the beginning of something even bigger. We all need to know what he's building 1:22 and why, and most importantly, what he's trying to build next. Welcome to Huge Conversations... 1:36 Thank you so much for doing this. I'm so happy to do it. Before we dive in, I wanted to tell you The goal of this Huge Conversation 1:42 how this interview is going to be a little bit different than other interviews I've seen you do recently. Okay! I'm not going to ask you any questions about - you could ask - company finances, 1:51 thank you! I'm not going to ask you questions about your management style or why you don't like one-on-ones. I'm not going to ask you about regulations or politics. I think all 2:01 of those things are important, but I think that our audience can get them well covered elsewhere. Okay. 2:06 What we do on Huge If True is we make optimistic explainer videos, and we've covered - I'm the worst 2:13 person to be in an explainer video. I think you might be the best, and I think that's what I'm really hoping that we can do together: make a joint explainer video about how we can actually 2:25 use technology to make the future better. Yeah. And we do it because we believe that when people see those better futures, they help build them. So the people that you're going to be talking to 2:33 are awesome.
They are optimists who want to build those better futures, but because we 2:39 cover so many different topics - we've covered supersonic planes and quantum computers and particle colliders - it means that millions of people come into every episode without 2:48 any prior knowledge whatsoever. You might be talking to an expert in their field who doesn't know the difference between a CPU and a GPU, or a 12-year-old who might grow up one day to be you 3:00 but is just starting to learn. For my part, I've now been preparing for this interview for 3:06 several months, including doing background conversations with many members of your team, 3:11 but I'm not an engineer. So my goal is to help that audience see the future that you see, so I'm going 3:18 to ask about three areas: The first is, how did we get here? What were the key insights that led to 3:23 this big fundamental shift in computing that we're in now? The second is, what's actually happening 3:29 right now? How did those insights lead to the world that we're now living in, where it seems like so much 3:34 is going on all at once? And the third is, what is the vision for what you see coming next? In order How did we get here? 3:42 to talk about this big moment we're in with AI, I think we need to go back to video games in the 3:48 '90s. At the time, I know game developers wanted to create more realistic looking graphics, but 3:56 the hardware couldn't keep up with all of that necessary math. NVIDIA came up with 4:02 a solution that would change not just games but computing itself. Could you take us back 4:09 there and explain what was happening, and what were the insights that led you and the NVIDIA 4:15 team to create the first modern GPU? So in the early '90s, when we first started the company, we observed that inside a software program, just a few lines of code, maybe What is a GPU? 4:27 10% of the code, does 99% of the processing, and that 99% of the processing could be done 4:33 in parallel. However, the other 90% of the code has to be done sequentially. It turns out that 4:40 the proper computer, the perfect computer, is one that could do sequential processing and parallel 4:45 processing, not just one or the other. That was the big observation, and we set out to build a company 4:52 to solve computer problems that normal computers can't. And that's really the beginning of NVIDIA. 5:00 My favorite visual of why a CPU versus a GPU really matters so much is a 15-year-old 5:05 video on the NVIDIA YouTube channel where the Mythbusters use a little robot shooting 5:11 paintballs one by one to show solving problems one at a time, or sequential processing, on a 5:16 CPU, but then they roll out this huge robot that shoots all of the paintballs at once, 5:24 doing smaller problems all at the same time, or parallel processing, on a GPU. 5:30 "3... 2... 1..." So NVIDIA unlocks all of this new power for video games. Why gaming first? Video games 5:41 require parallel processing for 3D graphics, and we chose video games because, Why video games first? 5:47 one, we loved the application. It's a simulation of virtual worlds, and who doesn't want to go to 5:52 virtual worlds? And we had the good observation that video games had the potential to be the largest 5:58 market for entertainment ever. And it turned out to be true. And having a large market 6:04 is important because the technology is complicated, and if we had a large market, our R&D budget could 6:10 be large and we could create new technology.
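To make the sequential-versus-parallel distinction concrete, here is a minimal Python sketch. NumPy's vectorized operations stand in for the GPU's many-at-once style of processing; the array sizes and function are purely illustrative, not anything NVIDIA ships.

```python
import time
import numpy as np

N = 2_000_000
a = np.random.rand(N)
b = np.random.rand(N)

# Sequential processing: one element at a time, like the single-paintball robot.
t0 = time.perf_counter()
out_seq = np.empty(N)
for i in range(N):
    out_seq[i] = a[i] * b[i]
t_seq = time.perf_counter() - t0

# Parallel-style processing: the same operation applied across the whole array
# at once, the way a GPU's many cores (or a CPU's SIMD units) would do it.
t0 = time.perf_counter()
out_par = a * b
t_par = time.perf_counter() - t0

print(f"sequential loop: {t_seq:.3f}s   all-at-once: {t_par:.4f}s")
```

On most machines the all-at-once version is orders of magnitude faster, which is the whole argument for building a processor around that style of work.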
And that flywheel between technology and market and greater 6:17 technology was really the flywheel that got NVIDIA to become one of the most important technology companies in the world. It was all because of video games. I've heard you say that 6:25 GPUs were a time machine? Yeah. Could you tell me more about what you meant by that? A GPU is like a 6:31 time machine because it lets you see the future sooner. One of the most amazing things anybody's 6:37 ever said to me was from a quantum chemistry scientist. He said, Jensen, because of NVIDIA's work, 6:46 I can do my life's work in my lifetime. That's time travel. He was able to do something that was beyond 6:52 his lifetime within his lifetime, and this is because we make applications run so much faster 7:00 and you get to see the future. And so when you're doing weather prediction, for example, you're seeing the future. When you're doing a simulation of a virtual city with virtual traffic, and we're 7:11 simulating our self-driving car through that virtual city, we're doing time travel. So 7:17 parallel processing takes off in gaming, and it's allowing us to create worlds in computers that 7:24 we never could have before, and gaming is sort of this first incredible case of parallel 7:30 processing unlocking a lot more power, and then, as you said, people begin to use that power across 7:37 many different industries. The case of the quantum chemistry researcher: when I've heard you 7:42 tell that story, it's that he was running molecular simulations in a way where it was much faster to 7:49 run in parallel on NVIDIA GPUs, even then, than it was to run them on the supercomputer with the CPU 7:56 that he had been using before. Yeah, that's true. So, oh my god, it's revolutionizing all of these other industries as well. It's beginning to change how we see what's possible with computers, and my What is CUDA? 8:07 understanding is that in the early 2000s you see this and you realize that actually doing 8:14 that is a little bit difficult, because what that researcher had to do is he had to sort of trick the GPUs into thinking that his problem was a graphics problem. That's exactly right, no, that's 8:23 very good, you did some research. So you create a way to make that a lot easier. That's right. 8:29 Specifically, it's a platform called CUDA, which lets programmers tell the GPU what to do using programming languages that they already know, like C, and that's a big deal because it gives way more 8:39 people easier access to all of this computing power. Could you explain what the vision was that led you to create CUDA? Partly researchers discovering it, partly internal inspiration, and 8:53 partly solving a problem. And you know, a lot of interesting ideas come out 9:00 of that soup. Some of it is aspiration and inspiration, some of it is just desperation, you 9:06 know. And the case of CUDA was very much the same way. Probably the first 9:13 external ideas of using our GPUs for parallel processing emerged out of some interesting work 9:19 in medical imaging: a couple of researchers at Mass General were using it to do CT 9:26 reconstruction. They were using our graphics processors for that reason, and it inspired us. 9:32 Meanwhile, the problem that we were trying to solve inside our company has to do with the fact that when you're trying to create these virtual worlds for video games, you would like them to be beautiful 9:41 but also dynamic. Water should flow like water, and explosions should be like explosions.
So there's 9:50 particle physics you want to do, fluid dynamics you want to do, and that is much harder to do if 9:56 your pipeline is only able to do computer graphics. And so we had a natural reason to want to do it 10:02 in the market that we were serving. So researchers were also horsing around with using 10:08 our GPUs for general-purpose acceleration, and so there were multiple factors 10:13 coming together in that soup. When the time came, we decided 10:20 to do something proper and created CUDA as a result of that. Fundamentally, the reason why 10:25 I was certain that CUDA was going to be successful, and we put the whole company behind it, was 10:31 because fundamentally our GPU was going to be the highest-volume parallel processor built in 10:38 the world, because the market of video games was so large, and so this architecture has a good chance of reaching many people. It has seemed to me like creating CUDA was this incredibly optimistic "huge 10:51 if true" thing to do, where you were saying, if we create a way for many more people to use much 10:58 more computing power, they might create incredible things. And then of course it came true. They did. Why was AlexNet such a big deal? 11:04 In 2012, a group of three researchers submits an entry to a famous competition where the goal is 11:09 to create computer systems that could recognize images and label them with categories. And their 11:14 entry just crushes the competition. It gets way fewer answers wrong. It was incredible. It blows 11:20 everyone away. It's called AlexNet, and it's a kind of AI called a neural network. My understanding is one reason it was so good is that they used a huge amount of data to train that system, 11:29 and they did it on NVIDIA GPUs. All of a sudden, GPUs weren't just a way to make computers faster 11:35 and more efficient; they're becoming the engines of a whole new way of computing. We're moving from 11:40 instructing computers with step-by-step directions to training computers to learn by showing them a 11:47 huge number of examples. This moment in 2012 really kicked off this truly seismic shift that we're 11:54 all seeing with AI right now. Could you describe what that moment was like from your perspective, and what did you see it would mean for all of our futures? When you create something new like 12:06 CUDA, if you build it, they might not come. And that's always the cynic's perspective. 12:14 However, the optimist's perspective would say, but if you don't build it, they can't come. And that's 12:20 usually how we look at the world. You know, we have to reason about intuitively why this would be very useful. And in fact, in 2012, Ilya Sutskever and Alex Krizhevsky and Geoff Hinton, at the University 12:33 of Toronto, the lab that they were at, they reached for a GeForce GTX 580 because they learned about 12:39 CUDA, and that CUDA might be able to be used as a parallel processor for training AlexNet. And 12:45 so our inspiration that GeForce could be the vehicle to bring this parallel architecture 12:51 into the world, and that researchers would somehow find it someday, was a good strategy. It 12:57 was a strategy based on hope, but it was also reasoned hope.
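As a toy illustration of what "telling the GPU what to do in a language you already know" looks like in practice, here is a minimal kernel written with the Numba library's CUDA support. It assumes a CUDA-capable GPU and the numba package installed; the function and numbers are made up for the example, and real CUDA C looks very similar in spirit.

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale_add(x, y, out):
    # Each GPU thread handles one element; thousands of them run in parallel.
    i = cuda.grid(1)
    if i < x.size:
        out[i] = 2.0 * x[i] + y[i]

x = np.arange(1_000_000, dtype=np.float32)
y = np.ones_like(x)
out = np.zeros_like(x)

threads = 256
blocks = (x.size + threads - 1) // threads
scale_add[blocks, threads](x, y, out)  # Numba moves the arrays to the GPU and back

print(out[:5])  # [1. 3. 5. 7. 9.]
```

The point of the example is the shape of the programming model: an ordinary-looking function, fanned out across the whole array by the hardware, which is what opened GPU computing to people who were not graphics programmers.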
The thing that really caught 13:03 our attention was that simultaneously we were trying to solve the computer vision problem inside the company, and we were trying to get CUDA to be a good computer vision processor, and we 13:13 were frustrated by a whole bunch of early developments internally with respect to our 13:19 computer vision effort and getting CUDA to be able to do it. And all of a sudden we saw AlexNet, 13:25 this new algorithm that is completely different than computer vision algorithms before 13:31 it, take a giant leap in terms of capability for computer vision. And when we saw that, it was 13:38 partly out of interest, but partly because we were struggling with something ourselves, and so we were 13:43 highly interested to want to see it work. And so when we looked at AlexNet, we were 13:49 inspired by that. But the big breakthrough, I would say, is when we saw AlexNet, we 13:57 asked ourselves, you know, how far can AlexNet go? If it can do this with computer vision, how 14:04 far can it go? And if it could go to the limits of what we think it could go, the type 14:11 of problems it could solve, what would it mean for the computer industry? And what would it mean for the computer architecture? And we rightfully reasoned that if machine learning, 14:25 if the deep learning architecture, can scale, the vast majority of machine learning problems 14:30 could be represented with deep neural networks. And the type of problems we could solve with machine 14:36 learning is so vast that it has the potential of reshaping the computer industry altogether, 14:42 which prompted us to re-engineer the entire computing stack, which is where DGX came from, 14:49 and this little baby DGX sitting here. All of this came from that observation that we ought 14:56 to reinvent the entire computing stack layer by layer by layer. You know, 65 years 15:03 after the IBM System/360 introduced modern general-purpose computing, we've reinvented computing as we 15:09 know it. To think about this as a whole story: parallel processing reinvents modern gaming and 15:16 revolutionizes an entire industry, then that way of computing, that parallel processing, begins to 15:22 be used across different industries. You invest in that by building CUDA, and then CUDA and the 15:29 use of GPUs allow for a step change in neural networks and machine learning, and begin a sort 15:38 of revolution that we're now seeing only increase in importance today... All of a sudden Why are we hearing about AI so much now? 15:45 computer vision is solved. All of a sudden speech recognition is solved. All of a sudden language understanding is solved. These incredible problems associated with intelligence, one 15:54 by one by one, where we had no solutions in the past and a desperate desire to have solutions, 16:01 all of a sudden get solved, one after another, every couple of years. It's incredible. 16:07 Yeah, so you're seeing that. In 2012 you're looking ahead and believing that that's 16:12 the future that you're going to be living in now, and you're making bets that get you there, really 16:17 big bets that have very high stakes. And then my perception as a lay person is that it takes a pretty long time to get there.
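The shift described here, from step-by-step instructions to learning from examples, can be seen in miniature. The sketch below trains a perceptron that is never told the rule separating two groups of points; it only sees labeled examples. This is a toy stand-in for what AlexNet did at vastly larger scale, with made-up data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Labeled examples: random points, labeled 1 when x + y > 1.
# The rule itself is never written into the learner; it is hidden in the data.
X = rng.random((200, 2))
labels = (X[:, 0] + X[:, 1] > 1.0).astype(float)

w = np.zeros(2)
b = 0.0
for _ in range(50):                      # show the examples repeatedly
    for x, t in zip(X, labels):
        pred = float(w @ x + b > 0)      # the model's current guess
        w += 0.1 * (t - pred) * x        # nudge the weights toward the answer
        b += 0.1 * (t - pred)

test = np.array([[0.9, 0.8], [0.1, 0.2]])
print((test @ w + b > 0).astype(int))    # expected: [1 0]
```

Nothing in the loop mentions the dividing line; the weights end up encoding it anyway, which is the essence of "showing examples" instead of "giving directions."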
You make these bets - 8 years, 10 years - so my question is: 16:30 if AlexNet happened in 2012, and this audience is probably seeing and hearing so much more about 16:36 AI and NVIDIA specifically 10 years later, why did it take a decade? And also, because you 16:43 had placed those bets, what did the middle of that decade feel like for you? Wow, that's a good question. It probably felt like today. You know, to me, there's always some problem, and 16:55 then there's some reason to be impatient. There's always some reason to be 17:03 happy about where you are, and there's always many reasons to carry on. And so I think, as I 17:09 was reflecting a second ago, that sounds like this morning! But I would say that in all things that 17:16 we pursue, first you have to have core beliefs. You have to reason from first principles, 17:25 and ideally you're reasoning from principles of either physics, or deep understanding of 17:32 the industry, or deep understanding of the science. Wherever you're reasoning from, you 17:38 reason from first principles. And at some point you have to believe something. And if those principles 17:45 don't change and the assumptions don't change, then there's no reason to change your 17:50 core beliefs. And then along the way there's always some evidence of success, 17:59 that you're heading in the right direction. And sometimes you go a 18:04 long time without evidence of success, and you might have to course correct a little, but the evidence comes. And if you feel like you're going in the right direction, we just keep on going. 18:12 The question of why we stayed so committed for so long - the answer is actually the opposite: there 18:19 was no reason to not be committed, because we believed it. And I've believed in NVIDIA 18:28 for 30-plus years, and I'm still here working every single day. There's no fundamental 18:34 reason for me to change my belief system, and I fundamentally believe that the 18:39 work we're doing in revolutionizing computing is as true today, even more true today, than it was before. And so we'll stick with it, you know, until otherwise. There are, 18:51 of course, very difficult times along the way. You know, when you're investing in something and nobody 18:58 else believes in it, and it costs a lot of money, and maybe investors or others would rather 19:05 you just keep the profit, or improve the share price, or whatever it is. 19:11 But you have to believe in your future. You have to invest in yourself. And we believed this so 19:17 deeply that we invested, you know, tens of billions of dollars before it really 19:25 happened. And yeah, it was 10 long years. But it was fun along the way. 19:32 How would you summarize those core beliefs? What is it that you believe about the way computers What are NVIDIA's core beliefs? 19:38 should work and what they can do for us that keeps you not only coming through that decade but also 19:44 doing what you're doing now, making bets I'm sure you're making for the next few decades? The first 19:50 core belief was our first discussion, about accelerated computing: parallel computing plus 19:56 general-purpose computing. We would add the two of those processors together and we would do accelerated computing. And I continue to believe that today.
The second was the recognition 20:06 that these deep learning networks, these DNNs, that came to the public during 2012, these deep neural 20:13 networks, have the ability to learn patterns and relationships from a whole bunch of different types of data, and that they can learn more and more nuanced features if they could be larger 20:24 and larger. And it's easy to make them larger and larger, make them deeper and deeper or wider and wider, and so the scalability of the architecture is empirically true. The fact 20:40 that with model size and data size being larger and larger, you can learn more knowledge, is 20:47 also empirically true. And so if that's the case, what are the 20:55 limits? There are none, unless there's a physical limit or an architectural limit or a mathematical limit, 21:00 and none was ever found, and so we believed that you could scale it. Then the only other question is: what can you learn from data? What can you learn from experience? Data is basically 21:11 digital versions of human experience. And so what can you learn? You obviously can learn object 21:17 recognition from images. You can learn speech from just listening to sound. You can learn 21:22 even languages and vocabulary and syntax and grammar, all just by studying a whole bunch 21:27 of letters and words. So we've now demonstrated that AI, or deep learning, has the ability to learn 21:33 almost any modality of data, and it can translate to any modality of data. And so what does that mean? Why does this moment feel so different? 21:42 You can go from text to text, right? Summarize a paragraph. You can go from text to text: translate 21:49 from language to language. You can go from text to images: that's image generation. You can go from 21:55 images to text: that's captioning. You can even go from amino acid sequences to protein structures. 22:03 In the future, you'll go from protein to words: "What does this protein do?" or "Give me an example of a 22:11 protein that has these properties," you know, identifying a drug target. And so you could 22:17 just see that all of these problems are around the corner to be solved. You can go from words 22:24 to video. Why can't you go from words to action tokens for a robot? From the computer's 22:33 perspective, how is it any different? And so it opened up this universe of opportunities and 22:40 universe of problems that we can go solve. And that gets us quite excited. It feels like 22:48 we are on the cusp of this truly enormous change. When I think about the next 10 years, unlike the 22:56 last 10 years, I know we've gone through a lot of change already, but I don't think I can predict 23:02 anymore how I will be using the technology that is currently being developed. That's exactly right. I 23:07 think the reason why you feel that way is, the last 10 years was really about the science 23:12 of AI. The next 10 years, we're going to have plenty of science of AI, but the next 10 years is going to 23:18 be about the application science of AI: the fundamental science versus the application science. And so the 23:24 applied research, the application side of AI, now becomes: how can I apply AI to digital biology? 23:31 How can I apply AI to climate technology? How can I apply AI to agriculture, to fishery, to robotics, 23:39 to transportation, to optimizing logistics? How can I apply AI to teaching? How do I apply AI 23:47 to, you know, podcasting, right?
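One way to see why, "from the computer's perspective," these translations all look alike: every modality is reduced to a sequence of integer tokens before a model ever sees it. The sketch below is schematic; the tokenizers are deliberately naive placeholders, and the model is a stub standing in for a trained network.

```python
# Any modality -> a sequence of integers. One model interface serves them all.

def tokenize_text(s: str) -> list[int]:
    return [ord(c) for c in s]                          # naive: one token per character

def tokenize_audio(samples: list[float]) -> list[int]:
    return [int((s + 1.0) * 127.5) for s in samples]    # quantize [-1, 1] into 0..255

def tokenize_actions(joint_angles: list[float]) -> list[int]:
    return [int(a) % 360 for a in joint_angles]         # degrees into integer bins

def model(tokens: list[int]) -> list[int]:
    # Stand-in for a trained transformer: real systems map token sequences
    # to token sequences here, regardless of what the tokens originally were.
    return tokens[::-1]

for seq in (tokenize_text("hello"),
            tokenize_audio([0.0, 0.5, -0.5]),
            tokenize_actions([90.0, 180.0])):
    print(model(seq))
```

Once text, sound, and robot actions are all just integer sequences, "words to video" and "words to action tokens" really are the same kind of problem to the machine.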
I'd love to choose a couple of those to help people see how 23:53 this fundamental change in computing that we've been talking about is actually going to change their experience of their lives, how they're actually going to use technology that is based 24:02 on everything we just talked about. One of the things that I've now heard you talk a lot about, and I have a particular interest in, is physical AI. Or in other words, robots - "my friends!" - meaning What's the future of robots? 24:16 humanoid robots but also robots like self-driving cars and smart buildings or autonomous warehouses 24:23 or autonomous lawnmowers or more. From what I understand, we might be about to see a huge 24:29 leap in what all of these robots are capable of, because we're changing how we train them. Up until 24:37 recently, you've either had to train your robot in the real world, where it could get damaged or wear 24:43 down, or you could get data from fairly limited sources, like humans in motion capture suits. But 24:50 that means that robots aren't getting as many examples as they'd need to learn quickly. 24:56 But now we're starting to train robots in digital worlds, which means way more repetitions a day, way 25:03 more conditions, learning way faster. So we could be in a big bang moment for robots right now, and 25:11 NVIDIA is building tools to make that happen. You have Omniverse, and my understanding is this is 3D 25:19 worlds that help train robotic systems so that they don't need to train in the physical world. 25:26 That's exactly right. You just announced Cosmos, which is a way to make that 3D universe 25:34 much more realistic. So you can get all kinds of different... if we're training something on 25:39 this table, many different kinds of lighting on the table, many different times of day, many different experiences for the robot to go through, so that it can get even more out of Omniverse. As 25:52 a kid who grew up loving Data on Star Trek and Isaac Asimov's books and just dreaming about a future with 26:00 robots, how do we get from the robots that we have now to the future world that you see of robotics? 26:08 Yeah, let me use language models, maybe ChatGPT, as a reference for understanding Omniverse and 26:17 Cosmos. So first of all, when ChatGPT first came out, it was extraordinary, and 26:24 it had the ability to, basically from your prompt, generate text. However, as amazing as 26:32 it was, it had the tendency to hallucinate: if it goes on too long, or if it pontificates about 26:40 a topic it's not informed about, it'll still do a good job generating plausible answers. 26:46 It just wasn't grounded in the truth. And so people called it hallucination. And 26:55 so shortly after, the next generation had the ability to be conditioned by context, so 27:03 you could upload your PDF and now it's grounded by the PDF. The PDF becomes the ground truth. It 27:09 could actually look up search, and then the search becomes its ground truth. And 27:14 between those, it could reason about how to produce the answer that you're asking for. And 27:21 so the first part is a generative AI, and the second part is ground truth. Okay, so now let's 27:28 come to the physical world. For the world model, we need a foundation model, just like 27:35 ChatGPT had a core foundation model that was the breakthrough, in order for robotics 27:41 to be smart about the physical world.
It has to understand things like gravity, friction, inertia, 27:50 geometric and spatial awareness. It has to understand that an object is sitting there even 27:57 when I looked away; when I come back, it's still sitting there: object permanence. It has to 28:02 understand cause and effect: if I tip it, it'll fall over. And so this kind of physical 28:08 common sense, if you will, has to be captured or encoded into a world foundation model so that 28:16 the AI has world common sense. Okay, so somebody has to go create that, and 28:23 that's what we did with Cosmos. We created a world model. Just like ChatGPT was a language model, 28:29 this is a world model. The second thing we have to go do is the same thing that we did 28:35 with PDFs and context: ground it with ground truth. And so the way we augment Cosmos 28:42 with ground truth is with physical simulations, because Omniverse uses physics simulation, which 28:49 is based on principled solvers. The mathematics is Newtonian physics, right? It's the math we 28:56 know, all of the fundamental laws of physics we've understood for a very long 29:02 time, and it's encoded into, captured into, Omniverse. That's why Omniverse is a simulator. And using the 29:09 simulator to ground or to condition Cosmos, we can now generate an infinite number of stories of the 29:19 future, and they're grounded on physical truth. Just like with PDF or search plus ChatGPT, we can 29:30 generate an infinite amount of interesting things, answer a whole bunch of interesting questions; the 29:37 combination of Omniverse plus Cosmos lets you do that for the physical world. So to illustrate 29:43 this for the audience: if you had a robot in a factory and you wanted it to learn every 29:49 route that it could take, instead of manually going through all of those routes, which could take days and could be a lot of wear and tear on the robot, we're now able to simulate all of them 29:59 digitally, in a fraction of the time, and in many different situations that the robot might face - it's dark, it's blocked, etc. - so the robot is now learning much, much faster. It seems to 30:10 me like the future might look very different than today. If you play this out 10 years, how do you see What is Jensen's 10-year vision? 30:17 people actually interacting with this technology in the near future? Cleo, everything that moves 30:22 will be robotic someday, and it will be soon. You know, the idea that you'll be pushing around 30:28 a lawn mower is already kind of silly. Maybe people do it because it's fun, but 30:35 there's no need to. And every car is going to be robotic. Humanoid robots - the technology 30:44 necessary to make them possible is just around the corner. And so everything that moves will be 30:50 robotic, and they'll learn how to be a robot in Omniverse and Cosmos, and we'll generate 30:59 all these plausible, physically plausible futures, and the robots will learn from them, and 31:05 then they'll come into the physical world, and it's exactly the same. A future where 31:11 you're just surrounded by robots is for certain. And I'm just excited about having my own R2-D2. 31:18 And of course R2-D2 wouldn't be quite the can that it is, rolling around. It'll be, you know, R2-D2: 31:25 it'll probably be a different physical embodiment, but it's always R2. My R2 31:32 is going to go around with me.
Sometimes it's in my smart glasses, sometimes it's in my phone, sometimes it's in my PC. It's in my car. So R2 is with me all the time, including, you know, when I get home, 31:43 where I left a physical version of R2. And whatever that version happens to 31:49 be, we'll interact with R2. And so I think the idea that we'll have our own R2-D2 for 31:55 our entire life, and it grows up with us - that's a certainty now. I think a lot of news media, What are the biggest concerns? 32:05 when they talk about futures like this, they focus on what could go wrong. And that makes sense. There 32:10 is a lot that could go wrong. We should talk about what could go wrong so we can keep it from going wrong. Yeah, that's the approach that we like to take on the show: what are the big challenges, 32:19 so that we can overcome them? Yeah. What buckets do you think about when you're worrying about this future? Well, there's a whole bunch of the stuff that everybody talks about: bias or toxicity 32:30 or just hallucination - you know, speaking with great confidence about something it knows nothing 32:37 about, and as a result we rely on that information. That's a version of generating 32:45 fake information, fake news or fake images or whatever it is. Of course, impersonation: 32:50 it does such a good job pretending to be a human, it could do an incredibly good 32:56 job pretending to be a specific human. And so the spectrum of areas we 33:05 have to be concerned about is fairly clear, and there are a lot of people who are 33:11 working on it. Some of the stuff related to AI safety requires 33:18 deep research and deep engineering, and that's simply: the AI wants to do the right thing, it 33:24 just didn't perform it right, and as a result hurt somebody. You know, for example, a self-driving car 33:29 that wants to drive nicely and drive properly, and just somehow the sensor broke down, or it 33:36 didn't detect something, or it made too aggressive a turn, or whatever it is. It did 33:41 it poorly. It did it wrongly. And so that's a whole bunch of engineering that has to 33:47 be done to make sure that AI safety is upheld, by making sure that the product functions properly. 33:54 And then lastly, what happens if the AI wants to do a good 34:00 job but the system failed? Meaning the AI wanted to stop something from happening, 34:07 and just when it wanted to do it, the machine broke down. And so this is 34:13 no different than a flight computer inside a plane having three versions of itself, so 34:19 there's triple redundancy inside the system, inside autopilots, and then you have two 34:25 pilots, and then you have air traffic control, and then you have other pilots watching out for 34:31 these pilots. And so AI safety systems have to be architected as a community, 34:38 such that these AIs, one, work, function properly; 34:47 when they don't function properly, they don't put people in harm's way; and there are sufficient safety and security systems all around them to make sure that we keep AI safe. And so this 34:58 spectrum of conversation is gigantic, and we have to take the 35:05 parts apart and build them as engineers. One of the incredible things about this moment that 35:11 we're in right now is that we no longer have a lot of the technological limits that we had in a What are the biggest limitations?
35:17 world of CPUs and sequential processing. And we've unlocked not only a new way to do computing 35:28 but also a way to continue to improve. Parallel processing has a different kind of physics to it 35:35 than the improvements that we were able to make on CPUs. I'm curious, what are the scientific or 35:42 technological limitations that we face now, in the current world, that you're thinking a lot about? Well, everything in the end is about how much work you can get done within the limitations of 35:54 the energy that you have. And so that's a physical limit, and the laws of 36:02 physics about transporting information - flipping bits and transporting 36:11 bits - mean that, at the end of the day, the energy it takes to do that limits what we can get done, and the 36:18 amount of energy that we have limits what we can get done. We're far from having any fundamental limits that keep us from advancing. In the meantime, we seek to build better and more energy-efficient 36:29 computers. This little computer - the big version of it was $250,000. - Pick it up? - Yeah. 36:38 Yeah, that's little baby DIGITS. This is an AI supercomputer. The version that I delivered - 36:46 this is just a prototype, so it's a mockup - the very first version was DGX-1, which I 36:52 delivered to OpenAI in 2016, and that was $250,000. It needed 10,000 times more energy 37:03 than this version, and this version has six times more performance. I know, it's incredible. We're 37:09 in a whole new world. And it's only since 2016; eight years later, we've increased the 37:16 energy efficiency of computing by 10,000 times. And imagine if anything else became 10,000 times more energy 37:25 efficient - if a car was 10,000 times more energy efficient, or an electric light bulb was 37:31 10,000 times more energy efficient. Our light bulb, instead of 100 watts, would right now be 37:38 using 10,000 times less power to produce the same illumination. And so the energy efficiency of 37:45 computing, particularly the AI computing that we've been working on, has advanced incredibly, and that's 37:51 essential, because we want to create more intelligent systems, and we want to use more computation to be smarter, and so energy efficiency to do the work is our number one 38:03 priority. When I was preparing for this interview, I spoke to a lot of my engineering friends, and this How does NVIDIA make big bets on specific chips (transformers)? 38:09 is a question that they really wanted me to ask. So you're really speaking to your people here. You've 38:15 shown that you value increasing accessibility and abstraction, with CUDA and with allowing more 38:21 people to use more computing power in all kinds of other ways. As applications of technology get more 38:28 specific - I'm thinking of transformers in AI, for example... For the audience, a transformer is a very 38:35 popular, more recent structure of AI that's now used in a huge number of the tools that you've 38:40 seen. The reason that they're popular is because transformers are structured in a way that helps them pay "attention" to key bits of information and give much better results. You could build chips 38:51 that are perfectly suited for just one kind of AI model, but if you do that, then you're making them 38:56 less able to do other things.
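The "attention" idea just introduced also explains the engineering pressure that comes up in the rest of this exchange: plain attention compares every token with every other token, so cost grows with the square of the context length. A quick NumPy sketch, illustrative only:

```python
import numpy as np

def attention(Q, K, V):
    # Every token's query is scored against every token's key: an n-by-n matrix.
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))  # row-wise softmax
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

# The quadratic blow-up that motivates flash attention and its relatives:
for n in (10, 1_000, 1_000_000):
    print(f"{n:>9,} tokens -> {n * n:>19,} pairwise scores")

# A tiny runnable case: 10 tokens with 64-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((10, 64)) for _ in range(3))
print(attention(Q, K, V).shape)  # (10, 64)
```

Ten tokens need a hundred scores; a million-token context naively needs a trillion, which is why so many alternative attention mechanisms keep being invented.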
So as these specific structures or architectures of AI get more popular, 39:03 my understanding is there's a debate between how much you place these bets on "burning them into the 39:09 chip" - designing hardware that is very specific to a certain task - versus staying more general. And 39:15 so my question is, how do you make those bets? How do you think about whether the solution is a car 39:22 that could go anywhere, or it's really optimizing a train to go from A to B? You're making bets 39:28 with huge stakes, and I'm curious how you think about that. Yeah, and that now comes back 39:33 to exactly your question: what are your core beliefs? The core 39:41 belief is either, one, that the transformer is the last AI algorithm, the last AI architecture, that any researcher will 39:52 ever discover, or that the transformer is a stepping stone towards evolutions of 40:01 transformers that are barely recognizable as a transformer years from now. And we believe the 40:08 latter. And the reason for that is because you just have to go back in history and ask yourself, 40:14 in the world of computer algorithms, in the world of software, in the world of 40:20 engineering and innovation, has one idea stayed around that long? And the answer is no. And so that's 40:27 kind of the beauty - that's in fact the essential beauty - of a computer: that it's able 40:34 to do something today that no one even imagined possible 10 years ago. And if you would have 40:41 turned that computer 10 years ago into a microwave, then why would the applications 40:48 keep coming? And so we believe in the richness of innovation and the 40:54 richness of invention, and we want to create an architecture that lets inventors and innovators 40:59 and software programmers and AI researchers swim in the soup and come up with some amazing 41:05 ideas. Look at transformers. The fundamental characteristic of a transformer is this idea 41:10 called the "attention mechanism," and it basically says the transformer is going to understand the meaning 41:16 and the relevance of every single word with every other word. So if you had 10 words, it has to figure 41:22 out the relationships across the 10 of them. But if you have 100,000 words, or if your context is 41:27 now as large as a PDF, or a whole bunch of PDFs, and the context window is now like 41:35 a million tokens, processing all of it across all of it is just impossible. And so the way you 41:42 solve that problem is with all kinds of new ideas: flash attention, or hierarchical attention, or, you 41:49 know, wave attention, which I just read about the other day. The number of different types of 41:54 attention mechanisms that have been invented since the transformer is quite extraordinary. 42:00 And so I think that's going to continue. We believe it's going to continue, 42:06 that computer science hasn't ended, that AI researchers have not all given up - we haven't 42:12 given up anyhow - and that having a computer that enables the flexibility of 42:21 research and innovation and new ideas is fundamentally the most important thing. One of the 42:29 things that I am just so curious about: you design the chips. There are companies that assemble the How are chips made? 42:37 chips. There are companies that design hardware to make it possible to work at nanometer scale. When 42:44 you're designing tools like this, how do you think about design in the context of what's physically 42:51 possible right now to make?
What are the things that you're thinking about with sort of pushing 42:56 that limit today? The way we do it is, even though we have things made - for 43:05 example, our chips are made by TSMC - we assume that we need 43:13 to have the deep expertise that TSMC has. And so we have people in our company who are incredibly 43:19 good at semiconductor physics, so that we have a feeling for, an intuition for, what the 43:25 limits of today's semiconductor physics are. And then we work very closely with them to 43:32 discover the limits, because we're trying to push the limits, and so we discover the limits together. Now, we do the same thing in system engineering and cooling systems. It turns out plumbing is really 43:41 important to us, because of liquid cooling. And fans are really important to us, because of air cooling, and we're trying to design these fans so that they're 43:49 aerodynamically sound, so that we could pass the highest volume of air and make the least amount of 43:54 noise. So we have aerodynamics engineers in our company. And so even though we don't 44:01 make them, we design them, and we have the deep expertise of knowing how to have them made. 44:09 And from that we try to push the limits. One of the themes of this conversation is 44:18 that you are a person who makes big bets on the future, and time and time again you've been right What's Jensen's next bet? 44:25 about those bets. We've talked about GPUs, we've talked about CUDA, we've talked about bets you've made in AI - self-driving cars, and "we're going to be right on robotics" - and this is my question: what 44:37 are the bets you're making now? The latest bet we just described at CES, and I'm very, very proud 44:43 of it and very excited about it, is the fusion of Omniverse and Cosmos, so that we have 44:50 this new type of generative world generation system, this multiverse generation system. I 44:59 think that's going to be profoundly important in the future of robotics and physical systems. 45:06 Of course, the work that we're doing with humanoid robots, developing the tooling systems and the 45:11 training systems and the human demonstration systems and all of this stuff that you've 45:17 already mentioned - we're just seeing the beginnings of that work, and I think the 45:23 next 5 years are going to be very interesting in the world of humanoid robotics. Of course, the work that we're doing in digital biology, so that we can understand the language of molecules and 45:34 understand the language of cells: just as we understand the language of physics and the 45:39 physical world, we'd like to understand the language of the human body and the language of biology. And if we can learn that, we can predict it, and then all of a sudden our ability to 45:50 have a digital twin of the human is plausible. And so I'm very excited about that work. I love 45:56 the work that we're doing in climate science: being able to, from weather predictions, understand 46:03 and predict high-resolution regional climates - the weather patterns within a kilometer above 46:10 your head. That we can somehow predict that with great accuracy - its implications are really quite 46:17 profound. And so the number of things that we're working on is really cool.
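A miniature of the "principled solver" grounding described a few exchanges back: a few lines of Newtonian physics can generate unlimited physically-true trajectories for a world model to learn from. This is a toy stand-in for what Omniverse does at scale, with made-up initial conditions.

```python
# Generate ground-truth trajectories of a tossed ball with a Newtonian solver.
# A world model trained on sequences like these inherits gravity "for free."
G = -9.81   # gravitational acceleration, m/s^2
DT = 0.01   # solver time step, seconds

def simulate_throw(vx: float, vy: float) -> list[tuple[float, float]]:
    x, y = 0.0, 0.0
    path = [(x, y)]
    while y >= 0.0:
        vy += G * DT          # gravity updates velocity (Euler integration)
        x += vx * DT
        y += vy * DT
        path.append((round(x, 3), round(y, 3)))
    return path

# An "infinite number of stories": vary the initial conditions to cover many futures.
dataset = [simulate_throw(vx, vy) for vx in (1.0, 3.0, 5.0) for vy in (2.0, 4.0)]
print(len(dataset), "trajectories; the first one ends near", dataset[0][-1])
```

Every trajectory the solver emits obeys the same physical law, which is exactly the property that makes simulation useful as ground truth for a generative world model.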
You know, we're 46:24 fortunate that we've created this instrument that is a time machine, and 46:37 we need time machines in all of these areas that we just talked about, so that we can see 46:43 the future. And if we could see the future, and we can predict the future, then we have a better 46:48 chance of making that future the best version of it. And that's the reason why scientists 46:53 want to predict the future. That's the reason why we try to predict the future 46:58 in everything that we try to design, so that we can optimize for the best version. So if 47:05 someone is watching this, and maybe they came into this video knowing that NVIDIA is an incredibly 47:12 important company but not fully understanding why or how it might affect their life, and they're now 47:18 hopefully better understanding a big shift that we've gone through over the last few decades in How should people prepare for this future? 47:23 computing, this very exciting, very sort of strange moment that we're in right now, where we're sort 47:30 of on the precipice of so many different things... If they would like to be able to look into the 47:36 future a little bit, how would you advise them to prepare, or to think about this moment that they're 47:42 in personally, with respect to how these tools are actually going to affect them? Well, there are 47:49 several ways to reason about the future that we're creating. One way to reason about it is: 47:57 suppose the work that you do continues to be important, but the effort by which you 48:04 do it went from, you know, being a week long to almost instantaneous. The 48:15 effort of drudgery basically goes to zero. What is the implication of that? This is 48:23 very similar to what changed when all of a sudden we had highways in this country. 48:30 That kind of happened, you know, in the last Industrial Revolution: all of a sudden we have interstate highways, and when you have interstate highways, what happens? Well, suburbs start 48:40 to be created, and all of a sudden distribution of goods from east to west is 48:48 no longer a concern, and all of a sudden gas stations start cropping up on highways, 48:55 and fast food restaurants show up, and motels show up, because people 49:03 traveling across the state, across the country, just wanted to stay somewhere for a few hours or overnight. And so, all of a sudden, new economies and new capabilities - new economies. 49:13 What would happen if a video conference made it possible for us to see each other without 49:19 having to travel anymore? All of a sudden it's actually okay to work further away from 49:24 work, and live further away. And so you ask yourself 49:32 these questions: what would happen if I have a software programmer with me 49:40 all the time, and whatever it is I can dream up, the software programmer could write for me? 49:46 What would happen if I just had a seed of an idea, and 49:54 I rough it out, and all of a sudden a prototype of a product was put in front 50:01 of me? How would that change my life, and how would that change my opportunity? What does it free me up to do? And so on and so forth. And so I think that in the next How does this affect people's jobs? 50:13 decade, intelligence - not for everything, but for some things - would basically become 50:22 superhuman.
But I can tell you exactly what that feels like. I'm surrounded 50:31 by superhuman people, super intelligence from my perspective, because they're the best in the 50:38 world at what they do, and they do what they do way better than I can do it. And I'm 50:46 surrounded by thousands of them, and yet it never one day caused me to think, all of a 50:56 sudden, I'm no longer necessary. It actually empowers me and gives me the confidence to go tackle more 51:05 and more ambitious things. And so suppose now everybody is surrounded by these 51:13 super AIs that are very good at specific things, or good at some of the things. What would that 51:20 make you feel? Well, it's going to empower you. It's going to make you feel confident. And 51:25 I'm pretty sure you probably use ChatGPT and AI - I feel more empowered today, more 51:32 confident to learn something today. The knowledge of almost any particular field - the barriers to 51:38 that understanding have been reduced, and I have a personal tutor with me all of the time. And 51:44 so I think that feeling should be universal. If there's one thing that I would 51:50 encourage everybody to do, it's to go get yourself an AI tutor right away. And that AI tutor could 51:56 of course just teach you things, anything you like: help you program, help you write, 52:03 help you analyze, help you think, help you reason. All of those things are going to 52:10 really make you feel empowered, and I think that's going to be our future. We're 52:16 going to become superhumans, not because we have superpowers; we're going to become 52:21 superhumans because we have super AIs. Could you tell us a little bit about each of these objects? 52:27 This is a new GeForce graphics card, the RTX 50 Series. It is essentially GeForce RTX 50 Series and NVIDIA DGX 52:39 a supercomputer that you put into your PC. We use it for gaming, of course; people today use it 52:45 for design and creative arts, and it does amazing AI. The real breakthrough here - and this 52:52 is truly an amazing thing - is that GeForce enabled AI: it enabled Geoff Hinton, Ilya Sutskever and 52:59 Alex Krizhevsky to train AlexNet. We discovered AI, we advanced AI, and then AI came back 53:07 to GeForce to help computer graphics. And so here's the amazing thing: out of 8 million pixels or so in 53:16 a 4K display, we are computing, we're processing, only 500,000 of them. The rest of them we use AI 53:24 to predict. The AI guessed them, and yet the image is perfect. We inform it with the 500,000 pixels that we 53:32 computed - we ray traced every single one, and it's all beautiful. It's perfect. And then we tell the 53:38 AI: if these are the 500,000 perfect pixels in this screen, what are the other 8 million? And it goes and 53:44 fills in the rest of the screen, and it's perfect. And if you only have to do fewer pixels, are you 53:50 able to invest more in doing those, because you have fewer to do, so then the quality is better, so the 53:58 extrapolation that the AI does... Exactly. Because whatever computing, whatever attention you have, whatever resources you have, you can place it into 500,000 pixels. Now, this is a perfect example of 54:11 why AI is going to make us all superhuman: because all of the other things that it can do, it'll do 54:17 for us, allowing us to take our time and energy and focus it on the really, really valuable things that 54:23 we do.
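The render-a-few-pixels, predict-the-rest idea just described, in miniature: the sketch below "renders" a coarse grid of exact pixels and fills in a full frame from them. Plain nearest-neighbor lookup stands in for the trained network that DLSS actually uses; the image function and resolutions are arbitrary stand-ins.

```python
import numpy as np

def render(xs, ys):
    # Stand-in for the expensive ray tracer: a smooth synthetic image.
    return np.sin(3 * xs) * np.cos(2 * ys)

# "Ray trace" only a coarse 25x25 grid of perfect pixels...
coarse = np.linspace(0, 1, 25)
small = render(*np.meshgrid(coarse, coarse))

# ...then predict a full 400x400 frame from them. DLSS puts a trained network
# here; nearest-neighbor lookup just shows the shape of the idea.
fine = np.linspace(0, 1, 400)
nearest = np.rint(fine * (coarse.size - 1)).astype(int)
predicted = small[np.ix_(nearest, nearest)]

truth = render(*np.meshgrid(fine, fine))
print("fraction of pixels actually computed:", 25 * 25 / (400 * 400))
print("mean abs error of the predicted pixels:", float(np.abs(predicted - truth).mean()))
```

Less than half a percent of the pixels are computed exactly, and the rest are inferred; the learned version of this trade is what lets the card spend its full ray-tracing budget on the few pixels it does compute.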
And so we'll take our own resource, which is, you know, energy intensive and attention intensive, and 54:33 we'll dedicate it to those few hundred thousand pixels and use AI to super-res, to upres, you know, everything else. And so this graphics card is now powered mostly by AI, and the computer 54:47 graphics technology inside is incredible as well. And then this next one: as I mentioned 54:52 earlier, in 2016 I built the first one for AI researchers, and we delivered the first one to OpenAI; 54:58 Elon was there to receive it. And for this version, I built a mini, mini version, and the 55:06 reason for that is because AI has now gone from AI researchers to every engineer, every student, every 55:15 AI scientist. AI is going to be everywhere. And so instead of these $250,000 versions, we're 55:21 going to make these $3,000 versions that schools can have, you know, students can have, and 55:28 you set it next to your PC or Mac and all of a sudden you have your own AI supercomputer. And 55:36 you could develop and build AIs. Build your own AI, build your own R2-D2. What do you feel is 55:42 important for this audience to know that I haven't asked?

What's Jensen's advice for the future?

One of the most important things I would 55:48 advise is, for example, if I were a student today, the first thing I would do is learn AI. How do 55:54 I learn to interact with ChatGPT, how do I learn to interact with Gemini Pro, and how do I learn 56:00 to interact with Grok? Learning how to interact with AI is not unlike being 56:10 someone who is really good at asking questions. You're incredibly good at asking questions, and 56:17 prompting AI is very, very similar. You can't just randomly ask a bunch of questions, 56:23 and so asking an AI to be an assistant to you requires some expertise and 56:30 artistry in how to prompt it. And so if I were a student today, irrespective of whether it's for math or science or chemistry or biology, it doesn't matter what field of science 56:40 I'm going to go into or what profession, I'm going to ask myself: how can I use AI to do my job 56:46 better? If I want to be a lawyer, how can I use AI to be a better lawyer? If I want to be a doctor, how can I use AI to be a better doctor? If I want to be a chemist, how do I use AI to be 56:55 a better chemist? If I want to be a biologist, how do I use AI to be a better biologist? That question 57:02 should be persistent across everybody. And just as my generation grew up as the first generation 57:10 that had to ask itself how to use computers to do our jobs better... Yeah, the generation before 57:17 us had no computers; my generation was the first generation that had to ask the question: how do I 57:23 use computers to do my job better? Remember, I came into the industry before Windows 95; in 1984 57:32 there were no computers in offices. And shortly after that, computers started to emerge, and 57:38 so we had to ask ourselves how to use computers to do our jobs better. The next generation doesn't 57:45 have to ask that question, but it obviously has to ask the next question: how can I use AI to do my job better? That is the start and finish, I think, for everybody. It's a really exciting and scary and 57:59 therefore worthwhile question, I think, for everyone. I think it's going to be incredibly fun. AI is 58:04 obviously a word that people are just learning now, but, you know, it's 58:10 made your computer so much more accessible.
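One way to act on the "AI tutor" advice above is a short script like the following, a minimal sketch assuming the OpenAI Python SDK (`pip install openai`) and an `OPENAI_API_KEY` in the environment; the model name and the tutoring system prompt are illustrative assumptions, and any chat-capable model and provider would serve the same purpose.

```python
# Minimal AI-tutor loop: a system prompt sets the tutoring role, and each
# question is sent with the running conversation so the tutor keeps context.
# Assumes the OpenAI Python SDK with OPENAI_API_KEY set in the environment;
# the model name below is an assumption -- substitute any chat model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{
    "role": "system",
    "content": "You are a patient tutor. Explain step by step, "
               "then ask one follow-up question to check understanding.",
}]

while True:
    question = input("You: ")
    if not question.strip():
        break  # empty input ends the session
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works
        messages=messages,
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print("Tutor:", answer)
```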
It is easier to prompt ChatGPT, to ask it anything you 58:15 like, than to go do the research yourself. And so we've lowered a barrier of understanding, we've 58:22 lowered a barrier of knowledge, we've lowered a barrier of intelligence, and everybody really just has to go try it. You know, the thing that's really, really crazy 58:32 is, if I put a computer in front of somebody and they've never used a computer, there is no chance 58:37 they're going to learn that computer in a day. There's just no chance. Somebody really has to 58:43 show it to you. And yet with ChatGPT, if you don't know how to use it, all you have to do is 58:49 type in "I don't know how to use ChatGPT, tell me," and it will come back and give you some 58:54 examples. And so that's the amazing thing. You know, the amazing thing about intelligence is 59:02 it'll help you along the way and make you superhuman along the way.

How does Jensen want to be remembered?

All right, I have 59:08 one more question, if you have a second. This is not something that I planned to ask you, but on the 59:13 way here... I'm a little bit afraid of planes, which is not my most reasonable quality, and 59:21 the flight here was a little bit bumpy. (Mhm.) Very bumpy. And I'm sitting there, and it's moving, and 59:30 I'm thinking about what they're going to say at my funeral, and after... ("She asked good questions," that's 59:37 what the tombstone's going to say.) I hope so! Yeah. And after "I loved my husband and my 59:44 friends and my family," the thing that I hoped they would talk about was optimism. I hope that 59:49 they would recognize what I'm trying to do here. And I'm very curious for you; you've been 59:56 doing this a long time, and there's so much that you've described in this vision ahead. What would the theme be that you would want people to say about what you're trying to do? 1:00:14 Very simply: they made an extraordinary impact. I think that we're fortunate because of some 1:00:23 core beliefs a long time ago, and by sticking with those core beliefs and building upon them, 1:00:32 we found ourselves today being one of the most important and 1:00:42 consequential technology companies in the world, and potentially ever. And so 1:00:49 we take that responsibility very seriously. We work hard to make sure that 1:00:56 the capabilities that we've created are available to large companies as well as 1:01:03 individual researchers and developers, across every field of science, no matter whether profitable or 1:01:10 not, big or small, famous or otherwise. And it's because of this understanding of 1:01:21 the consequential work that we're doing, and the potential impact it has on so many people, 1:01:27 that we want to make this capability as pervasive as possible. And I 1:01:37 do think that when we look back in a few years, what I hope the 1:01:47 next generation realizes is... well, first of all, they're going to know us because of 1:01:53 all the, you know, gaming technology we create. But I do think that we'll look back and the whole 1:01:59 field of digital biology and life sciences has been transformed. Our whole understanding of 1:02:06 material sciences has been completely revolutionized. Robots are helping 1:02:13 us do dangerous and mundane things all over the place.
That if we want to drive, we can drive, 1:02:19 but otherwise, you know, take a nap or enjoy your car like it's a home theater of yours; 1:02:26 you know, read on the way from work to home, and at that point you're hoping that you live far 1:02:31 away so you could be in the car for longer. And you look back and 1:02:37 you realize that there's this company almost at the epicenter of all of that, and it happens 1:02:43 to be the company that you grew up playing games with. I hope for that to be what the next generation learns. 1:02:50 Thank you so much for your time. I enjoyed it, thank you! I'm glad!