AI transcript
0:00:16 Hello, and welcome to the NVIDIA AI podcast. I’m your host, Noah Kravitz. Today, we’re
0:00:22 looking back on the year in AI 2025. But before we begin, if you’re enjoying the AI podcast,
0:00:27 please take a moment to follow us on Apple, Spotify, or wherever you’re listening. Thanks.
0:00:36 Our year began with NVIDIA’s Ming-Yu Liu talking about the importance of world foundation models
0:00:43 to advancing physical AI in episode 240. Forty conversations later, Jacob Lieberman introduced
0:00:49 us to the future of enterprise storage, AI data platforms, in episode 281. Along the way were
0:00:54 advances in AI models and the infrastructure that they run on, like the rise of agentic AI and the
0:00:59 AI factory. We heard firsthand from pioneers in healthcare, higher education, life sciences,
0:01:05 marketing, and other industries about how they’re using AI to advance their fields and
0:01:10 make work better for the people doing it. And we talked to everyone from researchers to roboticists
0:01:15 about the dawn of physical AI, where intelligence moves from our screens into the robots building
0:01:22 our cars, assisting our surgeons, and walking among us. 2025 was quite the ride. Let’s dive in.
0:01:32 This year in AI began as last year ended, with lots of talk about agents and agentic AI. So what exactly
0:01:39 is an AI agent? An evolution in the way people use generative AI, agentic AI is a move away from simple
0:01:43 call-and-response-style chatbots towards systems that have true agency.
0:01:52 Chris Covert from Inworld AI breaks this evolution down into phases in episode 243,
0:01:58 moving from simple conversation to an adaptive partner, and finally, full autonomy.
0:02:03 We have this, you know, first, again, is that conversational AI phase. And I’ll use a gaming
0:02:08 analogy, right? The conversational AI phase gives avatars, gives agents, I’ll use them interchangeably
0:02:13 today, extremely little agency in doing anything other than speaking, right? It may be able to respond
0:02:18 to my input if I ask it to do something, but it’s not going to physically change the state of something
0:02:24 other than the dialogue it’s going to tell me back. Next is the adaptive partner phase, where the AI is
0:02:30 observing and responding to changes on its own. It’s not micromanaging every decision, but it feels
0:02:35 like you’re collaborating with an agent or a unit that has just enough context to make smart decisions
0:02:41 on its own. Like an evolution of a recommendation engine being driven by, you know, a cognition
0:02:46 engine here. So it’s not just learning, but it feels like it’s learning what we need even before
0:02:50 we ask it. Again, I think that’s phase three. I think there’s still a phase four. And I think that’s
0:02:56 a fully autonomous agent. And that stage, you know, again, continuing our analogy, is a player two,
0:03:02 right? Where phase three is it’s adapting to us. Stage four is, hey, this thing is an agent on its own.
0:03:07 It feels like I’m playing against another human. It is making decisions that feel optimal to its own
0:03:13 objectives, aligned with mine or not. The immediate payoff of this capability is freeing human workers from
0:03:19 repetitive, error-prone, non-creative tasks, what we often refer to as toil. But here’s the key.
0:03:25 We don’t need the agents to be perfect to be valuable. In fact, we don’t even need them to do all of the
0:03:30 work for us. As NVIDIA’s Bartley Richardson points out in episode 258.
0:03:33 If it gets you 75, 80% of the way there, that’s fantastic.
0:03:34 That’s great.
0:03:39 Because, you know, I’m sure you do your fair share of writing, right? Like,
0:03:41 the hardest part for me about writing is that blank page.
0:03:42 That blank page, totally.
0:03:46 Right? And if I can get something that’s 80% of the way there, it’s great.
0:03:52 AI runs on data, and agentic AI is no different, which is good considering the sheer velocity of
0:03:57 information creation in today’s enterprise. Data growth is creating a widening gap between the data
0:04:04 we have and the insights we can actually extract. CytoReason’s Shai Shen-Orr describes this challenge in
0:04:10 the life sciences in episode 276, comparing the struggle to keep up with data growth to the Red
0:04:12 Queen effect from Alice in Wonderland.
0:04:18 You can think about it like data is exponential, insight is linear. Every day, the percent of data utilized to give
0:04:27 insight is lower. The analytical side of this and the AI solutions for this have been missing. The field is
0:04:34 still largely a manual field where you give people some data, they sit in front of their computer, you know,
0:04:39 they try to figure it out, they make some value and insight from this. And I figured that’s not a sustainable
0:04:47 solution. This field needs to move to ultimately build much larger integrative solutions that bring in many
0:04:54 different angles of machine learning, AI, statistics, and so forth to ultimately bridge this. The data
0:04:59 insight gap keeps growing. So you basically are constantly in a game in which you need to make it
0:05:04 faster. It’s actually what it’s called, you know, in evolution. And then you remember
0:05:11 Alice in Wonderland, the Red Queen, right? Where she said to Alice, you have to run just to stay in
0:05:17 place. That’s the Red Queen effect. So this, this need for us to continuously run is a huge driver for
0:05:24 automation, acceleration. And I would even say the cognitive meta-analysis that we as humans need to do
0:05:30 to somehow describe to a machine how we make decisions so that we can automate them.
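To make Shen-Orr’s “data is exponential, insight is linear” point concrete, here is a minimal sketch with illustrative numbers (not figures from the episode) showing how the fraction of data turned into insight keeps shrinking:

```python
# Illustrative numbers only: data compounds exponentially while insight
# grows linearly, so the share of data ever converted to insight falls.
data, insight = 100.0, 50.0      # arbitrary starting units
for year in range(1, 6):
    data *= 2.0                  # data doubles each year
    insight += 25.0              # insight adds a fixed amount each year
    print(f"Year {year}: {insight / data:.1%} of data utilized for insight")
```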
0:05:36 Right. To handle this exponential growth, we need massive compute resources. AI factories provide the
0:05:42 infrastructure to support enterprise-scale systems. But traditional ways of approaching storage had a
0:05:47 major flaw: data gravity. The data was heavy, and moving it created security risks. Here’s NVIDIA’s
0:05:55 Jacob Lieberman in episode 281. So far, in order to do AI, you’ve had to send your data out to some
0:06:01 kind of AI factory with a GPU, do all your processing and copy it back. Right. So the data has gravity,
0:06:07 and it turns out that instead of sending all your data to the GPU, you can actually send your
0:06:13 GPU to the data. And what that looks like is actually putting a GPU into your traditional storage
0:06:20 system on that same storage network and letting it operate on the data in place where it lives
0:06:26 without copying it out. And the advantage of generating these AI representations with the
0:06:31 source of truth data is that if the source of truth changes, you can immediately propagate those changes
0:06:37 to the representations.
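As a toy sketch of what Lieberman describes, here is what keeping derived representations in sync with the source of truth might look like; the class, embedding function, and document IDs are invented for illustration, and a real system would run a GPU model inside the storage network rather than this Python stand-in:

```python
from typing import Callable

class InPlaceIndex:
    """Derived AI representations kept next to the source of truth."""

    def __init__(self, embed: Callable[[str], list[float]]):
        self.embed = embed
        self.docs: dict[str, str] = {}              # source of truth
        self.vectors: dict[str, list[float]] = {}   # derived representations

    def upsert(self, doc_id: str, text: str) -> None:
        # Update the source of truth and immediately refresh its embedding,
        # instead of copying the data out and re-ingesting it later.
        self.docs[doc_id] = text
        self.vectors[doc_id] = self.embed(text)     # runs where the data lives

fake_embed = lambda text: [float(len(text))]        # stand-in for a GPU model
index = InPlaceIndex(fake_embed)
index.upsert("policy-1", "original text")
index.upsert("policy-1", "updated text")            # embedding stays in sync
print(index.vectors["policy-1"])
```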
0:06:43 This new approach reverses the old pattern: instead of shipping data out, we bring the GPU compute to the data. It evolves the AI factory from a distant processing plant into a unified,
0:06:49 efficient pipeline. Sarah Laszlo from Visa explains what this modern factory approach looks like in episode 256.
0:06:58 What that means to me is a single pipeline that goes from a data scientist with an
0:07:04 idea about a model they want to build all the way to the model running in production. It’s interesting
0:07:10 because I hadn’t really thought much about this AI factory terminology. Like, I had heard it, but I
0:07:15 hadn’t really thought it was what I was doing until I came here to GTC and I started hearing other people
0:07:22 talking about it. And then I realized, oh, that’s what my platform does. So my platform, recently we’ve
0:07:28 adopted what we call the Ray Everywhere strategy. So we use Anyscale’s Ray ecosystem to do the whole
0:07:33 thing, the whole shebang. So data conditioning, model training, and model serving, we do all in Ray.
0:07:39 And it is intentionally trying to be more of this factory concept where
0:07:45 there’s not a whole bunch of distinct parts or distinct tools that are living in different places
0:07:51 that work differently. It’s just one unified, consistent pipeline from start to finish.
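For a sense of what “the whole shebang in Ray” can look like, here is a minimal, hedged sketch built on Ray’s public APIs (Ray Data for conditioning, Ray Serve for serving); the dataset, “model,” and deployment are stand-ins for illustration, not Visa’s actual pipeline:

```python
import ray
from ray import serve

ray.init()

# Data conditioning with Ray Data: scale a feature across the dataset.
ds = ray.data.from_items([{"x": float(i)} for i in range(100)])
ds = ds.map(lambda row: {"x": row["x"] / 100.0})

# "Training": a trivial aggregate standing in for a real Ray Train job.
mean_x = ds.mean("x")

# Model serving with Ray Serve: expose the "model" behind an HTTP endpoint.
@serve.deployment
class Scorer:
    def __init__(self, mean: float):
        self.mean = mean

    async def __call__(self, request):
        value = float((await request.json())["x"])
        return {"score": value - self.mean}  # deviation from the training mean

serve.run(Scorer.bind(mean_x))  # one pipeline, data to production, one stack
```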
0:07:55 This shift isn’t just about efficiency. It is critical for data sovereignty.
0:08:00 Countries and companies need to ensure their sensitive intelligence stays in their buildings
0:08:06 and on their own soil. In episode 247, Kaaren Hilsen from Norwegian telecom operator Telenor
0:08:10 explains why they built a sovereign AI factory in Oslo.
0:08:17 So like Hive Autonomy, for example, I mean, they work with logistics, robotics. So they are actually
0:08:25 innovating, I say, a lot of industries, whether it’s ports, as I said, sort of factories or this sort
0:08:31 of in their operations and have efficiency cases. So they have very specific customer needs that they
0:08:37 are trying to solve. But they are, the reason sort of why they were very interested in coming to the
0:08:42 AI factory is that they’re sitting with sensitive data. So they were very keen. They wanted it to be
0:08:44 really on Norwegian soil, right?
0:08:49 The Telenor brand sort of represents security, you know, so there’s sort of a gain that really helps
0:08:56 them. And then the sustainability part is super key. And so that was sort of the combination of these
0:09:03 three. Capgemini is also a customer of ours. They are developing a product for doing voice-to-voice
0:09:09 translation. And we can say, yes, that can be done. But these are for sensitive dialogues.
0:09:15 Not all dialogues can go out in the cloud somewhere. And so very sort of sensitive dialogues, if you
0:09:21 think, you know, within the health sector, within the police. So not so much on program, but again,
0:09:28 it’s sort of a safe, secure environment. And that’s sort of really key. And another customer is working
0:09:35 a lot with the municipalities in Norway. And again, with sort of sensitive cases where they
0:09:38 sort of really would like their data to be secured.
0:09:44 To build trust in these factories, openness is essential. Jonathan Cohen from NVIDIA explains
0:09:50 in episode 278 how open models, like NVIDIA’s Nemotron family, allow for the customization
0:09:52 required by sovereign projects.
0:09:58 Say, you know, NVIDIA trains a model, a Nemotron model, and it’s great. But since we’ve
0:10:01 disclosed all our training data, you can look at the training data and say, for whatever reason, we have some
0:10:07 policies; this data we can’t use. And we can say, that’s fine. Everything you need to reproduce what
0:10:11 we did is there. You can train your own model, excluding that data. Or you say, well, I like the
0:10:16 data, but the mix is wrong. I don’t know. I’m a sovereign project. And it really needs to be
0:10:21 very good at speaking this language and understanding this culture. And that data wasn’t as represented in
0:10:26 your training set as I want it to be. Everything that we did is transparent. And so you can make
0:10:27 these modifications yourself.
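To illustrate what that openness enables in practice, here is a hypothetical sketch of excluding disallowed sources and re-weighting a training mix before retraining with a published recipe; the corpus, field names, and weighting strategy are invented for illustration:

```python
# Hypothetical corpus records; in practice these would be the disclosed
# training-data sources published alongside an open model.
corpus = [
    {"source": "webcrawl", "lang": "en", "text": "..."},
    {"source": "restricted_vendor", "lang": "en", "text": "..."},
    {"source": "books", "lang": "no", "text": "..."},
]

DISALLOWED = {"restricted_vendor"}  # sources your policy forbids

# Drop disallowed sources, then up-weight an underrepresented language
# by simple duplication (one of many possible mixing strategies).
filtered = [ex for ex in corpus if ex["source"] not in DISALLOWED]
mix = filtered + 3 * [ex for ex in filtered if ex["lang"] == "no"]

print(f"{len(mix)} examples in the adjusted training mix")
```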
0:10:33 Since our first episode back in 2016, the AI podcast has told stories about the real-world
0:10:39 impact of artificial intelligence. 2025 was no different. A big area of impact this year was,
0:10:45 once again, in healthcare, where AI is helping with everything from drug discovery to reducing
0:10:51 physician burnout. Here’s Anne Osdoit of Moon Surgical from episode 272, explaining how their
0:10:53 Maestro system supports surgeons.
0:11:01 Physician fatigue is absolutely real. We did our first in-human study in Brussels, Belgium, with
0:11:07 a surgeon, and he used the system over 50 cases. And he told us after a few weeks, “Hey,
0:11:15 when I get back home in the evening, my wife tells me that I’m, you know, a lot nicer than before.” So,
0:11:21 like, what’s going on? And, you know, I mean, he attributed that just to his own fatigue.
0:11:28 He’s like, you know, I end my day in a way that is a lot more relaxed. It’s about both the physical
0:11:29 and the mental loads.
0:11:35 With AI in healthcare, safety is the number one priority. Hippocratic AI has tackled this
0:11:40 by building a constellation architecture, using multiple AI models that constantly double-check
0:11:45 each other. CEO Munjal Shah describes how it works in episode 262.
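Before the excerpt, here is a hypothetical sketch of the pattern Shah describes: a narrow checker model audits every conversation turn for a single concern alongside the primary model. All rules, limits, and names below are invented for illustration, not Hippocratic AI’s actual system:

```python
import re

# One narrowly focused checker in a "constellation": it watches every turn
# for a single concern (overdose/contraindication), so nothing relies on one
# large model reasoning across the entire context at once.
MAX_SINGLE_DOSE_MG = {"ibuprofen": 800}                    # illustrative limit
CONTRAINDICATED = {("ibuprofen", "ckd_stage_3"),
                   ("ibuprofen", "ckd_stage_4")}

def overdose_engine(turn: str, conditions: set[str]) -> list[str]:
    """Flag dose and contraindication issues in a single conversation turn."""
    alerts, text = [], turn.lower()
    for drug, limit in MAX_SINGLE_DOSE_MG.items():
        if drug not in text:
            continue
        alerts += [f"{drug} is contraindicated with {c}"
                   for c in conditions if (drug, c) in CONTRAINDICATED]
        alerts += [f"{drug} dose {d} mg exceeds single-dose limit {limit} mg"
                   for d in re.findall(r"(\d+)\s*mg", text) if int(d) > limit]
    return alerts

# Every turn fans out to focused checkers like this before the reply is sent.
print(overdose_engine("Can I take 1200 mg of ibuprofen?", {"ckd_stage_3"}))
```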
0:11:49 We literally have multiple models double-checking each other. Right.
0:11:54 And what people don’t realize is that a lot of the models now, they say you can give a lot of
0:12:00 input tokens to them now. Just put it all in there. It’ll figure it out. And with Gemini it’s like,
0:12:04 what, a million? I think it is now, a million tokens. So it’s like, oh, okay, no problem.
0:12:07 Right. But it can’t reason across it all. Yeah.
0:12:10 They’ll show you examples of what we’ll call needle in haystacks, where it’ll be like, okay,
0:12:15 it’ll find that one thing. Yeah. I mean, grepping for a word is not that hard in computer science.
0:12:21 It’s like, we can find a word. But what you’re really trying to do is reason across it. So I’ll
0:12:26 give an example. If you ask your care manager, can I have ibuprofen? And they say, sure, you can have
0:12:31 ibuprofen, but don’t take too much. That’s fine, right? Because it’s an over-the-counter medication.
0:12:35 Unless you have chronic kidney disease stage three or four, then it’ll kill you. Right.
0:12:42 Well, if you put the rules for ibuprofen and CKD into GPT-4 and then ask it, it’ll do great.
0:12:47 If you put in all the rules for all condition-specific over-the-counter medications and ask,
0:12:52 it’ll still do pretty good. It’ll start missing some sometimes, which is still not okay,
0:12:56 because you could kill people, but fine. If you put in the patient’s medical history,
0:13:02 the patient’s last 10 conversations with you, all of those rules for over-the-counter medication
0:13:07 disallowance, and the current checklist for what you’re supposed to follow with that patient,
0:13:12 and maybe a few other things and then ask it, good luck. And what it is, is we have
0:13:16 an attention span problem. But if you have multiple models, we have these other models
0:13:21 only focused on checking one thing at a time. So there’s an overdose engine, and it listens to
0:13:24 every turn of the conversation. It’s like, are we talking about drugs? Are we talking about drugs?
0:13:29 Yes, we’re talking about drugs. Okay. And then it’s like, well, okay, did somebody just say a number
0:13:35 that’s an overdose relative to their prescription or relative to max toxicity of what you can have of
0:13:40 that drug? Okay, it did. And it may not seem that hard for one pill versus two pills, but when you’re
0:13:43 talking about creams and injectables, it gets quite hard. Sure. I took a whole bunch of my
0:13:48 testosterone cream and I rubbed it on my hand. Was that an overdose? Right. I don’t know. How much
0:13:52 cream was in your hand? Right. What’s a little bit? What’s a little bit? Was it a pea size? Was it a
0:13:57 cherry tomato size? Was it an apple size? Right. Our LLM knows how to ask all these questions. Yeah. And
0:14:03 knows how to navigate assessing whether it’s actually an overdose. And if a patient shares
0:14:08 overdose information with a care manager in a clinical setting, you need to do something. AI is also changing
0:14:13 healthcare from a totally different angle by transforming agriculture. Paul Mikesell of
0:14:19 Carbon Robotics explains in episode 270 why his company’s approach to weed control swaps chemical
0:14:28 herbicides for AI-guided lasers. I’ve also learned a lot about the quality of our food system. And I know
0:14:34 that there’s lots of discussion about this now. We are becoming more aware of it, that different herbicides
0:14:41 are being banned in Europe, United States, etc. We are learning about more of the long-term negative
0:14:46 health effects. Again, the ones who really suffer from it over their lifetime is the farmers who get
0:14:51 exposed to this stuff in much higher doses than the consumer. But even the consumer, you know, even you
0:14:58 right now are participating in some form of a multi-decade, maybe multi-generational science
0:15:04 experiment. We all have glyphosate in our system. If you take
0:15:10 everybody listening to this podcast right now and we all went and did a urine sample, you would find
0:15:15 about 90% of us have glyphosate in our system right now. What’s glyphosate? It’s the active ingredient
0:15:21 in Roundup. Right. We know that it’s carcinogenic. Like any carcinogen, it’s only a question of exposure
0:15:27 over time. So, we should be able to, with the kinds of technology that are available today,
0:15:33 with the things that AI can do, we should be able to take a step back and say, do we really need to be
0:15:40 spraying this stuff on our food in order to grow it and survive as a population? Yeah. My answer to that
0:15:45 question, I think, is no, we don’t need to do that. And we should be able to do things like the LaserWeeder.
0:15:50 Yeah. Beyond the healthcare benefits, Carbon Robotics’ robots help farmers operate more
0:15:55 efficiently and sustainably. And they look really cool, too. Speaking of cool, in the world of
0:16:00 marketing and media, agents are fundamentally changing the relationship between brands and
0:16:07 consumers. Firsthand’s Jon Heller joined episode 242 to describe a shift where AI agents curate the web
0:16:12 specifically for the user’s intent. I had been working in the gaming world and some of the
0:16:18 generative AI abilities for gaming assets when language models really came out. And something
0:16:26 struck us, something very powerful, which is, and this is a metaphor for the math inside, but AI now
0:16:34 understands the ideas, intents, and needs you may have from what you’re reading, what you’re watching,
0:16:40 what you might ask it outright. And it can go find the right response or take the right responding
0:16:47 action. And everything is presented to you in a very natural human way. And if you back up a step and think of
0:16:52 that happening all the way through a consumer’s use of the digital world, from when they’re searching and
0:16:58 becoming aware of things they might need, when they do some investigation and read up on products or services,
0:17:04 when they go to browse or shop, when they buy, all of those modes change pretty fundamentally. They don’t replace.
0:17:11 We think they get enhanced because instead of the world of the past, where I maybe did a search,
0:17:17 got some directions and a link, went to a place, read up on something, browsed for something,
0:17:22 went to, saw an ad maybe, went to another place to try to find the version I want. Those are all sort of
0:17:30 separate stops on the internet where it’s, you know, the same content everybody sees. AI instead is going
0:17:37 to understand and learn at each moment what it is you need. And as with most things AI, data is the
0:17:43 core. The people who have the most and best data about a product or service are the brands. They are
0:17:49 the retailers, the people who sell it. So they can create brand agents, which means your experience
0:17:55 on the internet at all of those moments in the journey from first learning about it to figuring out what the right
0:18:03 configuration is and comparing and browsing and buying is going to adapt on the fly through these agents. So it
0:18:10 doesn’t replace the web, but it changes things from you looking at stuff someone wrote to something that’s partially
0:18:16 adapting to what you actually need, understanding your needs. But the agents that are doing that for you are from the
0:18:23 retailers and brands themselves because it’s their data that is what you need. And that sort of changes the internet into
0:18:25 kind of your internet for both parties.
0:18:31 While software agents are transforming the digital world, a massive shift is happening in the physical world as well.
0:18:38 This is the dawn of physical AI, where AI models don’t just generate text or images, they control things that move,
0:18:44 like the aforementioned farm machinery. According to Sanja Fidler, VP of AI Research at NVIDIA,
0:18:49 the scale of this opportunity is staggering. Here’s Sanja in episode 249.
0:18:56 At the end of the day, robots need to operate in the physical world, in our world. And this world is
0:19:02 three-dimensional and conforms to the laws of physics. And there’s humans inside, right, that we need to
0:19:11 interact with. You know, we typically refer to AI that operates in the real physical world as “physical AI”. So I’ll
0:19:17 maybe use that term quite a lot, right? So physical AI is really kind of the upcoming big industry,
0:19:25 very likely larger than generative and agentic AI. You know, Jensen typically says everything that moves,
0:19:29 all devices that move will be autonomous, right? So that’s kind of the vision.
0:19:36 So a robot operating in the real world obviously needs to understand the world. What am I seeing? What is
0:19:44 everything I’m seeing doing? How is it going to react to my action, right? So beyond understanding, it needs to act.
0:19:51 But there is a catch. You can’t train a physical robot the same way you train a chatbot. If a chatbot makes
0:19:56 a mistake, you might get a typo. If a robot makes a mistake, it’ll probably break something. To solve
0:20:02 this, researchers like Ming-Yu Liu are building world foundation models, AI that understands physics and
0:20:08 space-time, allowing robots to simulate thousands of futures before they take a single step in reality.
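To make that verification loop concrete, here is a hedged sketch of the checkpoint-selection use case Ming-Yu describes below: score every trained policy checkpoint across many simulated kitchens, then keep only a few finalists for real deployment. The function and numbers are stand-ins, not NVIDIA’s actual tooling:

```python
import random

random.seed(0)

def rollout_in_simulation(checkpoint_id: int, kitchen_id: int) -> float:
    """Stand-in for a world-model rollout; returns a task success score."""
    return random.random()  # a real system would run the policy in simulation

NUM_CHECKPOINTS, NUM_KITCHENS, KEEP = 1000, 50, 3

# Average each checkpoint's simulated success over many varied kitchens.
scores = {
    ckpt: sum(rollout_in_simulation(ckpt, k)
              for k in range(NUM_KITCHENS)) / NUM_KITCHENS
    for ckpt in range(NUM_CHECKPOINTS)
}

# Narrow a thousand checkpoints down to a handful worth testing for real.
finalists = sorted(scores, key=scores.get, reverse=True)[:KEEP]
print("Checkpoints to try on the real robot:", finalists)
```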
0:20:16 As Ming-Yu explains in episode 240. So I think world foundation models are important to physical AI
0:20:23 developers. You know, physical AI systems are systems with AI deployed in the real world, right? And different from digital AI,
0:20:32 these physical AI systems that interact with the environment can create damage, right? So this could be real harm, right?
0:20:39 Right, right. So a physical AI system might be controlling a robotic arm or some other piece
0:20:46 of equipment changing the physical world. Yeah, I think there are three major use cases for physical AI.
0:20:52 Okay. It’s all around simulation. The first one is, you know, when you train a physical AI system,
0:20:57 you train a deep learning model, you have a thousand checkpoints. Do you know which one you want to deploy?
0:21:03 Right. Right. And if you deploy each one individually, it’s going to be very time-consuming. And if one is bad,
0:21:09 it’s going to damage your kitchen, right? So with the world model, you can do verification in the simulation.
0:21:17 Right. So you can quickly test out this policy in many, many different kitchens before,
0:21:23 you know, you deploy in the real kitchen. And after this verification step, you may narrow down to
0:21:30 three checkpoints. And then you do the real deployments. So, you know, you can have an easier
0:21:35 life to deploy your physical AI. Once these brains are trained safely in digital worlds,
0:21:40 they need bodies. And while we may see many form factors in factories, there is a massive surge in
0:21:47 humanoid robotics going on outside those factory walls. Yashraj Narang from NVIDIA’s Seattle Robotics Lab
0:21:53 explains in episode 274 how this isn’t just an aesthetic choice. It’s a practical requirement
0:21:58 for robots that need to work alongside us. You know, there’s a group of people, you know,
0:22:03 forward-thinking people, Jensen very much included, this is near and dear to his heart, that felt that
0:22:09 the time is right for this dream of humanoid robotics to finally be realized, right? You know, let’s,
0:22:14 let’s actually go for it. And, you know, this begs the question of why, why humanoids at all? You know,
0:22:19 why have people been so interested in humanoids? Why do people believe in humanoids? And I think that
0:22:24 the most common answer you’ll get to this, which I believe makes a lot of sense is that the world has
0:22:32 been designed for humans. You know, we have built everything for us, for our form factors, for our hands.
0:22:40 And if we want robots to operate alongside us in places that we go to every day, you know, in our home,
0:22:47 in the office, and so on, we want these robots to have our form. And in doing so, they can do a lot of
0:22:52 things, ideally, that we can. You know, we can go up and down stairs that were really built for the
0:22:58 dimensions of our legs. We can open and close doors that are located at a certain height and have a
0:23:05 certain geometry because they’re easy for us to grab. Humanoids could, you know, manipulate tools
0:23:11 like hammers and scissors and screwdrivers and pipettes if you’re in a lab, these sorts of things,
0:23:16 which were built for our hands. As AI moves from the screen to the physical world, it is also
0:23:23 fundamentally changing our creative and professional lives. In episode 265, Canva’s Danny Wu talks about
0:23:25 AI and creative superpowers.
0:23:31 You kind of see like the magic of Canva is integrating all the different steps and different
0:23:38 parts of design into a simple page, as I like to call it. And so we really invested in our content
0:23:45 library, in millions of templates that make it easier to start. And what we saw and got really excited
0:23:51 about AI was that firstly, we can offer, you know, all the amazing high quality content for people to use.
0:23:58 But then the user might want something, they might have an idea that didn’t necessarily exist. Maybe it has
0:24:04 actually never been created in the world. Like AI just gives us this superpower and ability to actually
0:24:10 create things on demand specifically for what someone has in mind or in mission and just kind of turn that
0:24:17 idea, turn that like, turn that search term or prompt into something they can use to express themselves.
0:24:22 But as these systems become more widespread, we must focus on inclusivity. We need to ensure that
0:24:28 the data feeding these models represents everyone. Angle Bush, founder of Black Women in Artificial Intelligence, reminds
0:24:31 us of the goal of true equity in episode 250.
0:24:38 One of the things that I’ve always said to people is, I want Black Women in Artificial Intelligence to be so
0:24:44 successful that it no longer has to exist. We’re really not looking for members. We’re looking for
0:24:50 people to be a part of a movement and really understand and trust that vision of the movement that
0:24:56 we’re going to make sure that you have all the tools you need in order to be a part of the AI economy,
0:25:01 in order to pivot into your career. And in education, leaders like Dr.
0:25:06 Cynthia Teniente-Matson at San Jose State University are teaching students that no matter how powerful
0:25:13 the tool, the human element remains essential. Here’s Dr. Teniente-Matson in episode 275.
0:25:19 There are some students I’ve talked to who are using the tools for study guides. There are some
0:25:26 students that are using the tools for first drafts. I think, however we use the tools, it’s important,
0:25:31 if we’re going to be writing about things or communicating, that we’re citing references and
0:25:38 saying, you know, this was co-developed based on whatever sort of information they might have
0:25:38 retrieved from the instrument. And also to validate it, because, you know, these hallucinations exist. But as time
0:25:52 goes on, the hallucinations are diminishing, especially if you’re building your own custom
0:25:57 GPTs. That doesn’t mean mistakes aren’t going to happen. Sure. But that’s, as I say to students
0:26:04 regularly, Noah, and to faculty and staff: you are still the human in the loop. We’re not trying to replace
0:26:13 the human in the loop. Be, you know, have the tool be your co-pilot or your assistant that you’re
0:26:21 directing. So, looking back on 2025, what’s our best piece of guest-given advice for the year to come?
0:26:27 It’s simple. Start now. As Derek Slager of Amperity puts it in episode 271, if you’re still on the
0:26:33 sidelines when it comes to artificial intelligence, it’s high time to get in the game. I would say the one
0:26:38 piece of advice, and I give this advice a lot, is start now. It’s so important. It’s so important
0:26:43 because, like, it’s early, right? We’re still figuring out the patterns and the practices,
0:26:50 you know, like, as an industry, we’re learning a lot about kind of how to, you know, put these
0:26:56 incredible new technologies together in ways that really, you know, move the needle. And,
0:27:01 you know, right now, you just have a choice, right? You can be a doer who’s in that learning loop,
0:27:05 or you can be an observer and kind of, you know, wait and see. And I think, you know,
0:27:09 we talk a lot about this here, like, you know, speed’s the only thing that matters. And so,
0:27:13 I don’t think it’s viable in the current market to be outside that learning loop.
0:27:14 Right.
0:27:19 And the good news is it’s early, right? And so, you’re not too late, but it’s getting to
0:27:24 the point where pretty soon you’re late. And so, I think we’re certainly past the point. And again,
0:27:27 this is something that’s changed in the last six months. We’re past the point where people are like,
0:27:34 well, we’ll see if this AI thing plays out or not. Like, it’s overwhelmingly obvious where things are
0:27:40 going. And so, yeah, get off the sidelines, get in there, try stuff, learn. It’s easier than ever,
0:27:45 you know, to do that. There’s more information out there. And of course, you know, AI feeds itself,
0:27:48 right? AI can also help people figure out where to start and how to get through. And so,
0:27:53 yeah, start now and go really fast. That’s the path to success.
0:27:59 We are moving toward a future of collaboration, where human creativity is amplified by silicon
0:28:04 capability. NVIDIA’s Jacob Lieberman leaves us with this final thought on the partnership between
0:28:07 people and agents in episode 281.
0:28:16 There will be teams composed of carbon people and silicon agents, and they’re collaborating on tasks.
0:28:22 And at various times, the humans will be conducting the orchestra, and at other times,
0:28:27 the orchestra will be conducting itself. And that might be the most efficient way to get the work
0:28:32 done. Human judgment is critical. Human strategizing is critical. And there’s always room for that.
0:28:40 So it’s a way to complement the things that we’re very good at with some of the things where we could
0:28:42 use some help. Yeah.
0:28:51 2025 was an incredible year for AI, and all signs point to 2026 being full of more breakthroughs and
0:28:57 transformations in artificial intelligence and how we use it to change the ways we live and work.
0:29:02 Follow the NVIDIA AI podcast wherever you get your podcasts to stay up with the latest in the industry as
0:29:10 told by the people creating it. And browse the complete archive of episodes at ai-podcast.nvidia.com.
0:29:24 Thanks for listening.
The year in AI began with agents and brought us creative superpowers, robots on farms and in operating rooms, and so much more. Look back on AI in 2025 through the voices of the people who created it in this recap episode.
Listen to every episode: ai-podcast.nvidia.com