NVIDIA’s Jacob Liberman on the Power of Agentic AI in the Enterprise – Ep. 250

AI transcript
0:00:15 Hello, and welcome to the NVIDIA AI podcast. I’m your host, Noah Kravitz.
0:00:20 Developers are excited about agentic AI, and they’re not alone. More and more enterprises
0:00:25 are deploying applications with agentic capabilities. But with the excitement comes new questions
0:00:30 and challenges. How widespread will adoption of agentic AI become? What should AI teams
0:00:35 be thinking about when designing and developing AI agent applications for the enterprise? And
0:00:40 what should they be thinking about during adoption? With us live from GTC 2025 to explore the growing
0:00:45 role of agentic AI in the enterprise is Jacob Liberman. Jacob is a director of product management
0:00:51 at NVIDIA, where he leads a team building cloud-native gen AI software solutions. Currently, he’s focused
0:00:56 on building accelerated storage platforms that connect AI agents to enterprise data. And prior
0:01:02 to joining NVIDIA, Jacob held product management and engineering positions at Red Hat, AMD, and Dell.
0:01:06 Jacob, welcome to the AI podcast, and thanks so much for taking the time to join us.
0:01:08 Thank you for having me. I’m very happy to be here.
0:01:13 So should we start with the basics? And I’m just going to ask you, what is an AI agent?
0:01:21 Sure. I’d say an AI agent is the latest evolution in the way people are using Gen AI. We’re kind of in
0:01:26 the third era of Gen AI use already, which is crazy when you think about it, because it’s really been
0:01:32 only 18 months or two years since the technology became widespread. But I would say that it started
0:01:39 out, people would chat with LLMs and they would use Gen AI as co-pilots and assistants to do their work.
0:01:46 Next, they used retrieval-augmented generation to attach their LLMs to data and chat about their
0:01:54 data. And now people are using large language models to reason and act and do things in the world.
0:02:02 So you could think about it this way. With an LLM, you could ask it to plan you a trip to Europe.
0:02:09 With an AI agent, you could ask it to plan you a trip to Europe and to book it for you and to give
0:02:15 it little cues like, hey, I like castles. And it will do the research and create an itinerary for you,
0:02:18 compare prices, and then actually book the trip.
0:02:24 Right. So before we get a little deeper, is an agent a large language model with different
0:02:30 capabilities? Is it a completely different piece of technology? Is there kind of a concise way,
0:02:36 if there’s not, that’s fine, to sort of just set the level for the listeners of how agents and LLMs and
0:02:37 other models work together?
0:02:49 Sure. So I would say that a large language model, when you train it, it’s a bit of a generalist and you can give it additional training data to make it more of a specialist.
0:02:57 You can instruction tune it to actually follow directions and call tools and learn how to use the tools.
0:03:04 So you can kind of think of an AI agent as, you know, you have a machine shop with a bunch of tools on the wall.
0:03:11 You’ve taken your LLM to school and it has a bachelor’s degree and then you give it a vocational degree and it knows how to use those tools on the wall now like an expert.
0:03:12 Right.
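
To make that machine-shop picture concrete, here is a minimal sketch of a single tool call, assuming an OpenAI-style chat-completions API; the model name, the `get_flight_prices` helper, and the fares are illustrative stand-ins, not anything from the episode:

```python
# Minimal single tool call, assuming an OpenAI-style chat API.
import json
from openai import OpenAI

client = OpenAI()

def get_flight_prices(destination: str) -> dict:
    # Stand-in for a real booking/search API.
    return {"destination": destination, "lowest_fare_usd": 412}

tools = [{
    "type": "function",
    "function": {
        "name": "get_flight_prices",
        "description": "Look up current flight prices to a destination.",
        "parameters": {
            "type": "object",
            "properties": {"destination": {"type": "string"}},
            "required": ["destination"],
        },
    },
}]

messages = [{"role": "user", "content": "Find me a cheap flight to Prague."}]
first = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

# Assumes the model chose to call the tool; production code must check.
call = first.choices[0].message.tool_calls[0]
result = get_flight_prices(**json.loads(call.function.arguments))

# Return the tool result so the model can finish its answer.
messages.append(first.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)})
final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
print(final.choices[0].message.content)
```
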
0:03:18 Fantastic. And then there are agents kind of popping up all over the place now and there are agent stores and things like that.
0:03:35 Is it a matter of kind of going out and finding the agent that you want and then do you have to ensure compatibility with the models that you’re using or is it designed so that, you know, you can grab an agent and put it into your workflow and these things generally work with each other?
0:03:47 That’s a great question. So I would say that one of the biggest challenges to AI agent adoption right now in the industry is a lack of standardization.
0:03:47 Okay.
0:03:56 So things have been standardized in terms of the approach, tuning agents to be able to use tools and to follow instructions.
0:04:06 But where we lack standardization is in the way that the agents communicate with one another and the way that they store their actions and represent it in memory.
0:04:11 That becomes important whenever you need agents to interact with each other.
0:04:19 It can add a lot of friction if they’re communicating with different protocols or storing their conversations in memory in different ways.
0:04:19 Right.
0:04:26 So typically you can grab an agent from, let’s say, any vendor off the shelf and apply it to some task.
0:04:31 But then when it goes out into the real world and starts interacting with other agents, that’s where things become tricky.
0:04:32 Gotcha.
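
One way to picture the standardization gap Jacob describes: every vendor shapes inter-agent messages differently. A hypothetical normalized envelope, sketched below with entirely invented field names, shows the kind of thing a standard would pin down:

```python
# Hypothetical normalized inter-agent message envelope; illustration only,
# no such standard is implied by the conversation.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentMessage:
    sender: str        # e.g. "travel-planner-agent"
    recipient: str     # e.g. "booking-agent"
    intent: str        # "request", "result", "error", ...
    content: dict      # task payload
    trace_id: str = "" # lets agents reconstruct a shared memory of the task
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Without an agreed envelope, every agent pair needs a bespoke adapter for
# both the wire protocol and the memory representation -- the friction above.
```
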
0:04:36 And so how widespread do you foresee agentic AI becoming?
0:04:39 Is this the future or at least the, you know, sort of near-term, present future?
0:04:41 I think yes.
0:04:51 I think that the vast majority of tokens generated by large language models will be to enact and to serve agent communication.
0:04:53 So agents do two things.
0:04:56 They reason and they call tools and communicate.
0:04:57 I guess that’s three things.
0:05:01 So that act of reasoning, it’s almost like talking to yourself.
0:05:04 It’s generating a lot of intermediary tokens.
0:05:17 So if you look at it in terms of inference, a single agentic workload will usually generate a lot more tokens than a corresponding, let’s say, inference chat workload.
0:05:22 And the way I think things will evolve will be a lot like computational finance.
0:05:27 Not too long ago, the vast majority of stock trades were conducted by humans.
0:05:34 But in the world we live in now, probably 75%, 80% of them are conducted by machines with other machines.
0:05:42 And so if you look at that ratio, I would say we’re going to see the same thing where the vast majority of LLM inference will be between agents and not involve humans.
0:05:43 Right.
0:05:44 And what will they be doing?
0:05:53 Well, the agents will be doing many of the things that human workers do right now or maybe that human workers would like to do.
0:05:53 Right.
0:05:55 But they don’t have time to do.
0:05:59 Or things that human workers don’t like to do, which I call toil.
0:06:00 Yeah.
0:06:08 Toil is very, let’s say, repetitive, error-prone tasks that, you know, they’re not creative tasks.
0:06:11 They’re not necessarily productive tasks, but they take up a lot of our time.
0:06:11 Yeah.
0:06:15 That is probably where we will see the first stages of agent adoption.
0:06:15 Right.
0:06:16 Which I’m all for.
0:06:17 Yeah.
0:06:25 Freeing people from doing the kind of busy work that fills up a lot of their time and leaving them available to do the higher value work.
0:06:25 Right.
0:06:25 And then we’ve had, you know, dating back several years now when I think about it, guests come on to the pod and use the metaphor of an orchestra conductor, or a, you know, train conductor, where the human will be assigning those tasks out to the agents, the AI systems.
0:06:46 And then almost being like the manager, right?
0:06:53 And you’ve got a fleet of agents going and doing the work for you and they come back and you review or you kind of prompt it to take a different tack or that kind of thing.
0:06:56 Is that still a viable metaphor?
0:06:57 You know, I’m not sure.
0:07:02 I think it’s very supportive of our human egos.
0:07:03 Right.
0:07:07 To believe that we would be in the best position to kind of conduct the orchestra.
0:07:07 Yes.
0:07:09 That’s not clear.
0:07:09 Yeah.
0:07:21 So probably what will happen is that there will be teams composed of carbon people and silicon agents and they’re collaborating on tasks.
0:07:28 And at various times the humans will be conducting the orchestra and at other times the orchestra will be conducting itself.
0:07:29 Conducting itself.
0:07:33 And that might be the most efficient way to get the work done.
0:07:33 Right.
0:07:36 But I do think it’s a comforting metaphor.
0:07:37 Right, right.
0:07:39 No, that’s, I mean, that’s well said.
0:07:41 And it was interesting when you said that.
0:07:45 It was one of those, I wasn’t expecting that response, but it made perfect sense, right?
0:07:49 Because if this keeps going the way it’s been going, agents are going to get pretty smart pretty fast.
0:07:50 They will.
0:07:56 And I think that there are unique characteristics of humans.
0:08:03 I mean, now I’m getting way outside of my role as a product manager at NVIDIA and I’m just kind of philosophizing.
0:08:03 That’s fine.
0:08:06 I think you called me a carbon human a few minutes ago, so I’m good with it.
0:08:07 You’re carbon, right?
0:08:08 You’re carbon.
0:08:09 You’re not a digital human.
0:08:09 No.
0:08:21 No, but I do think that, again, if we go back to the finance metaphor and computational finance, there are often algorithms that rebalance portfolios.
0:08:21 Right.
0:08:31 And maybe a human spot checks them, or maybe the human uses their intuition and experience to recognize some unique situation where they need to stray from that path.
0:08:31 Right.
0:08:42 And maybe the human knows some information or has some intuition about their individual clients that, well, maybe this risk profile is not right for this client.
0:08:45 So there’s always, there’s human judgment is critical.
0:08:48 Human strategizing is critical, and there’s always room for that.
0:08:57 So it’s a way to complement the things that we’re very good at with some of the things where we could use some help.
0:08:57 Yeah.
0:08:58 Toil.
0:08:58 Toil.
0:08:59 All right.
0:09:10 So let’s kind of flip perspective for a minute and talk about some of the challenges in the enterprise and, you know, in an organization when it comes to adopting Agentic AI.
0:09:11 Yes.
0:09:12 So my-
0:09:13 How much time do we have?
0:09:15 Well, no, it’s a great question.
0:09:21 So my primary role is to bring generative AI to every industry.
0:09:22 Right.
0:09:23 And there are challenges.
0:09:28 There are technological challenges and there are social challenges because work always occurs in a social context.
0:09:41 So on the technology side, we touched on this a bit earlier, but the lack of standardization across agent implementations can make agents working together very inefficient.
0:09:41 Right.
0:09:51 And this becomes a big problem because this work can be costly and enterprises want to extract the most benefit from their investment in the infrastructure.
0:09:55 So we have to make those communications more efficient.
0:09:58 Standardization is one way to drive that.
0:10:04 Another problem is that just like any language model, the work of an agent is not always deterministic.
0:10:11 What that means is, you know, there’s this notion of hallucination where if a model doesn’t know the answer, it might make something up.
0:10:13 And that can happen with an AI agent.
0:10:17 And enterprises need deterministic business outcomes.
0:10:17 Sure.
0:10:19 Because you’re, you know, betting your business on it.
0:10:27 So that’s another area where there’s things we can do to increase that determinism and to add checkpoints and whatnot to ensure that it’s there.
0:10:40 The other source of, I would say, indeterminacy is cost: with a typical LLM interaction, you can more or less judge how much it’s going to cost in terms of the number of tokens it generates.
0:10:59 Well, when you combine reasoning, which is potentially an unbounded source of tokens, with complex problems, all of a sudden you can’t really predict how much that seemingly innocuous question will cost you.
0:10:59 Right.
0:11:00 Right.
0:11:05 It’s like when you travel to Europe and you’re using your roaming cell phone and you come back and you have a $5,000 bill.
0:11:05 Right.
0:11:07 And so that’s what we want to avoid.
0:11:11 And so I think enterprises, they need deterministic outcomes.
0:11:13 That’s the technological side.
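
Two of the mitigations hinted at here are easy to sketch, assuming an OpenAI-compatible API (parameter names vary by vendor): pin the sampling for repeatability, and cap the token budget so a seemingly innocuous question cannot run up the bill.

```python
# Sketch: trading creativity for repeatability, and bounding spend.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize Q3 revenue risks."}],
    temperature=0,    # near-greedy decoding: same input -> (mostly) same output
    seed=1234,        # best-effort reproducibility on supported backends
    max_tokens=800,   # hard ceiling on generated tokens, i.e. on cost
)
print(response.choices[0].message.content)
print(response.usage.total_tokens)  # meter the actual cost of every call
```
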
0:11:20 On the social side, the autonomy of agents raises a lot of ethical and legal questions.
0:11:20 Of course.
0:11:26 When I first started working with enterprises, it was IT folks and security folks who would show up in the meetings.
0:11:30 But lately, a lot of lawyers have been showing up and HR people have been showing up.
0:11:33 So that’s just kind of an interesting shift.
0:11:34 And we’ll have to see where that goes.
0:11:37 Our guest is Jacob Liberman.
0:11:51 Jacob is a director of product management here at NVIDIA, where he leads a team building cloud-native gen AI software solutions with a focus on accelerated storage platforms that can connect the enterprise and the enterprise data to all of the AI capabilities we’ve been talking about.
0:11:56 Jacob, you touched on this a little bit, but I think it’s something worth digging into more here.
0:12:09 This notion of agentic AI and autonomy and, you know, mentioning some of the systems where the human in the loop maybe checks in less often than some of the others because the system can go faster and the human can see, oh, everything’s cool.
0:12:11 And we’re going, how far do we take that?
0:12:14 How much autonomy should agents have?
0:12:16 I mean, technically speaking, theoretically, how much could they have?
0:12:18 But how much should they have?
0:12:22 And what are some of the conversations that, you know, you’ve been a part of thinking about this?
0:12:24 What’s the current thinking about agentic autonomy?
0:12:26 So that’s another great question.
0:12:33 This is a frequent question that we get and a cause of a lot of concern for people.
0:12:39 This notion that we’re going to kind of turn these agents loose on the world and they’ll be able to do whatever they want.
0:12:51 So what I usually tell people is that just like human workers, the roles and responsibilities of agents require a range of autonomy.
0:12:59 And in some places, in some roles, the agent needs wide latitude to kind of make decisions.
0:13:08 For example, if you’re a customer service AI agent and someone calls you up, they might have something missing from their order.
0:13:09 They might have a customer service complaint.
0:13:10 They might have this.
0:13:12 They might want to know what the other options are.
0:13:12 Sure.
0:13:15 And you really need to be creative in how you address that.
0:13:20 And probably the risk of giving that AI agent so much autonomy is relatively low.
0:13:21 Right.
0:13:29 So, and I guess then it’s kind of a, let’s say like a plot where you have autonomy on the X axis and risk on the Y.
0:13:36 There are other scenarios, say you have an AI agent that’s responsible for rebalancing your retirement portfolio.
0:13:36 Right.
0:13:41 There, you don’t want it to get very creative, you know, oh, we’re going all crypto.
0:13:44 No, you want it to kind of follow the tried and true formulas.
0:13:51 And you can embed that level of autonomy and determinism into the actions of the agent.
0:13:51 Right.
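
That autonomy-versus-risk plot can be written down directly as a policy gate. A minimal sketch, with invented action names and thresholds: low-risk actions run autonomously, anything above the line waits for human sign-off.

```python
# Sketch: autonomy on one axis, risk on the other, as a policy gate.
RISK = {"answer_faq": 0.1, "issue_refund": 0.4, "rebalance_portfolio": 0.9}
APPROVAL_THRESHOLD = 0.5  # above this line, a human must sign off

def execute(action: str, run, request_human_approval) -> str:
    risk = RISK.get(action, 1.0)  # unknown actions default to maximum risk
    if risk > APPROVAL_THRESHOLD and not request_human_approval(action):
        return f"{action}: held for human approval"
    return run(action)

# The customer-service agent answers freely; "going all crypto" on a
# retirement portfolio stops at the gate.
deny = lambda action: False
print(execute("answer_faq", run=lambda a: f"{a}: done", request_human_approval=deny))
print(execute("rebalance_portfolio", run=lambda a: f"{a}: done", request_human_approval=deny))
```
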
0:14:00 So, the fear of these autonomous agents kind of running around, you know, having fun and being wild,
0:14:02 I think that is a bit unfounded.
0:14:05 But there are situations where you want autonomy.
0:14:06 Of course.
0:14:06 Frankly.
0:14:09 Is there a, I don’t know, is there a best practice?
0:14:10 Is there a formula?
0:14:15 Not formula, but for kind of figuring out when you’re meeting with, you know, customers, potential customers.
0:14:18 How do you kind of find that balance?
0:14:21 Well, this is an area that’s still pretty nascent.
0:14:21 Yeah.
0:14:26 So, I would say we have a map.
0:14:27 We have a roadmap.
0:14:35 We know what we need to do because if you look at autonomy in the physical world, we’ve already seen these questions a bit with robotics.
0:14:37 And NVIDIA, of course, we work with robots.
0:14:41 We work in simulation and we work in the real world.
0:14:53 So, if it’s an autonomous vehicle you’re putting on the road, if it’s an AMR that is in a factory, if it’s a flight control system, an avionic co-pilot,
0:14:58 we have AI agents that assist tractors, you know, in the field.
0:15:03 All of those uses are governed by standards.
0:15:09 And the standards basically assess the level of risk inherent in the task and the level of risk mitigation that you need.
0:15:14 So, what I expect will happen is that we’re going to learn from AI autonomy in the physical world
0:15:19 and apply those lessons to how we deploy AI in the enterprise.
0:15:26 So, to dig into that a little deeper with what NVIDIA is doing now with agents and working in the enterprise and elsewhere,
0:15:31 can you talk a little bit about some of the things NVIDIA is doing in the space?
0:15:31 Yeah.
0:15:36 So, this will be a bit of a shameless plug for the work my team is doing.
0:15:36 Fantastic.
0:15:37 Yes.
0:15:41 I mean, after all, I’m a human agent and I need to feed my family.
0:15:46 So, the first thing that we’re doing, the first thing NVIDIA is doing and that my team is doing specifically,
0:15:50 is we’re building blueprints for AI agents.
0:15:56 And blueprints are reference architectures implemented in code where we show how you can take NVIDIA software
0:16:01 and apply it to some productive task in an enterprise to solve some real business problem.
0:16:11 And these blueprints are generally taken by global system integrators and service delivery partners,
0:16:17 who take them in, adapt them to their own portfolio, differentiate,
0:16:19 and then take them out to our customers at scale.
0:16:22 So, for example, we have a blueprint for a digital human.
0:16:32 The digital human can be made into a bedside digital nurse, a sportscaster, a bank teller
0:16:34 with just some verticalization.
0:16:36 I’m grateful you said sports and not pod.
0:16:37 Podcaster.
0:16:38 No, no.
0:16:41 That’s too far away from our capability.
0:16:41 Yeah.
0:16:47 And then also where these AI agents start to intersect the physical world, the thing I was just talking
0:16:52 about, we also have blueprints for teaching agents how to work in simulation and then deploying
0:16:53 them into the physical world.
0:16:53 Right, right, right.
0:16:53 Yeah.
0:16:54 So that’s very cool.
0:16:58 And then the other thing we’re working on, of course, NVIDIA, we’re an acceleration company.
0:16:58 Yeah.
0:17:00 We make things run faster.
0:17:06 The other thing we’re doing is we’re working with this diverse ecosystem of agent platform
0:17:11 builders, and we’re trying to make sure that they all run great on NVIDIA software.
0:17:11 Yeah.
0:17:15 Both the inference piece at scale and the distributed communications.
0:17:17 So it’s really those two things.
0:17:18 What are you hearing from developers?
0:17:24 What are they excited about when it comes to using agentic AI in the work they’re doing?
0:17:32 It’s very interesting and not surprising that software developers are among the earliest
0:17:35 adopters of agentic AI.
0:17:40 Now, a lot of the focus of agentic AI up until this point has been on consumer products.
0:17:41 Right.
0:17:46 But developers have kind of adopted these technologies at a very rapid rate.
0:17:52 And it’s really interesting when you watch people program now, they kind of co-develop with
0:17:53 the AI agent.
0:17:56 And they’ll say things like, document this for me.
0:17:58 How could I do this differently?
0:18:02 Okay, take this code and encapsulate it and make it multi-user.
0:18:10 So they’re actually using human language to ask the AI agent to perform some tasks, whether
0:18:12 they can do it themselves or not, is kind of immaterial.
0:18:15 It’s just more efficient to at least use the agent as a starting point.
0:18:15 Right.
0:18:16 Right.
0:18:20 Reasoning has kind of become a little bit of an “it” word over the past few months.
0:18:23 And when it comes to all of this stuff, what does it actually mean?
0:18:27 And we can stick with the developer context maybe to talk about it.
0:18:32 What does it actually add in if I’m a developer and I’m coding and I’m using a coding co-pilot
0:18:33 to do the things you’re just talking about?
0:18:39 If the co-pilot has reasoning capabilities versus not having them, what material difference
0:18:40 might that make?
0:18:48 So reasoning, the use of reasoning with large language models, has kind
0:18:54 of become the dominant paradigm and use case for large language models and agents in the enterprise.
0:18:56 It’s kind of, is it the default at this point, basically?
0:18:57 It’s not the default.
0:18:58 No.
0:19:04 Because the more time you spend thinking about something, the longer it takes, the more latency
0:19:05 there is in terms of your reaction.
0:19:08 And in some use cases, it’s appropriate and some it isn’t.
0:19:13 But where it’s appropriate are doing things like, let’s say, biomedical research.
0:19:19 We have a blueprint that can be used to simulate developing molecules.
0:19:25 And so one of the things our customers can do is take a reasoning model, attach it to all
0:19:30 of their private research data, attach it to all the public research data on PubMed on the
0:19:36 internet, come up with a unique molecular design, and then pass it into our simulation software
0:19:39 to see if they can build it and make sure that it’s stable.
0:19:43 So there you don’t need a real-time interactive response.
0:19:47 You’re okay if the LLM goes off and thinks for a while before it comes back with an answer.
0:19:54 So the actual capability that’s emerging is something that we’re calling test-time compute.
0:19:58 Test-time compute is System 2 thinking.
0:19:59 It’s thinking about thinking.
0:20:05 The model will look at the way it’s solving the problem and decide if it’s doing it in the
0:20:08 most efficient way or in the best way, in the optimal way.
0:20:09 And it’s fairly interesting.
0:20:12 You can actually watch the reasoning models think.
0:20:13 Right, yeah, yeah.
0:20:14 Think through a problem.
0:20:14 Yeah.
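
“Thinking about thinking” can be approximated even without a dedicated reasoning model. A sketch of one common pattern, a draft-critique-revise loop that spends extra test-time tokens reviewing its own work (the prompts, model name, and round count are illustrative):

```python
# Sketch: spending extra test-time compute on self-review.
from openai import OpenAI

client = OpenAI()

def llm(prompt: str) -> str:
    r = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}])
    return r.choices[0].message.content

def solve_with_reflection(task: str, rounds: int = 2) -> str:
    answer = llm(task)
    for _ in range(rounds):
        # "Thinking about thinking": ask whether the approach itself is sound.
        critique = llm(f"Task: {task}\nDraft answer: {answer}\n"
                       "Is this solved in the best way? List concrete flaws.")
        answer = llm(f"Task: {task}\nDraft: {answer}\nCritique: {critique}\n"
                     "Rewrite the answer, fixing the flaws.")
    return answer  # more intermediary tokens, usually a better answer
```
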
0:20:16 And you asked about developers specifically.
0:20:21 You know, I saw this one reasoning model interacting with one of my coworkers.
0:20:25 And he said something to it like, I’m a product manager at NVIDIA.
0:20:30 Look at this GitHub repo and explain it to me as though I were five.
0:20:30 Right.
0:20:36 And then the reasoning agent actually thought, well, this will be difficult to explain to a
0:20:40 five-year-old, but this guy is a product manager at NVIDIA.
0:20:42 So clearly he’s not five years old.
0:20:46 So I will up my language a little bit to be more appropriate for his level.
0:20:47 Oh, that’s amazing.
0:20:51 Yeah, it was actually interesting to see the reasoning model work this out.
0:20:53 Maybe we can dig into some other use cases.
0:20:56 You were talking about development, but where else are we seeing,
0:21:00 where are you seeing agentic AI systems being used in the enterprise?
0:21:01 Sure.
0:21:04 So we talked about software development.
0:21:06 We talked about research.
0:21:10 Research can span things like summarization.
0:21:16 Let’s say you have a bunch of complicated documents, a long email thread, a long Slack thread.
0:21:20 You could go through and read each one of those things, or you could ask an AI agent to
0:21:21 summarize it for you.
0:21:24 You could ask it to prepare structured reports for you.
0:21:31 So these are two variations that both require reasoning that are becoming dominant use cases
0:21:31 in enterprise.
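
The two variations differ mainly in the output contract. A sketch, again assuming an OpenAI-compatible API (the input file and JSON keys are invented): the same thread once as a free-text summary, once forced into machine-readable JSON for a structured report.

```python
# Sketch: free-text summary vs. structured report from the same thread.
import json
from openai import OpenAI

client = OpenAI()
thread = open("long_slack_thread.txt").read()  # illustrative input file

summary = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": f"Summarize this thread in five bullets:\n{thread}"}],
).choices[0].message.content

report = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # force machine-readable output
    messages=[{"role": "user",
               "content": "Return JSON with keys 'decisions', 'owners', and "
                          f"'open_questions' for this thread:\n{thread}"}],
).choices[0].message.content

print(summary)
print(json.loads(report)["open_questions"])
```
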
0:21:37 Now, another really interesting thing we’re seeing in enterprise is that people are rewriting
0:21:43 solved problems, applications that already solve problems through automation, in this new
0:21:49 format, because they want to take advantage of the natural human interaction.
0:21:53 So for example, let’s say predictive failure analysis.
0:21:57 You have an offshore oil drilling rig.
0:22:00 It’s very costly if a component breaks.
0:22:04 You have all sorts of telemetry that will let you know if something’s about to happen.
0:22:10 You could implement that predictive failure analysis with a traditional data science or machine learning
0:22:11 approach.
0:22:17 But if you use a large language model and if you use an agent, now you have the capability
0:22:19 to interact with it in a natural way.
0:22:19 Right.
0:22:27 So that you can use that to plan the responses, be alerted, trigger all of the logistic actions
0:22:29 that might come into place to circumvent that failure.
0:22:29 Yeah.
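
The rig example reduces to a pattern: keep the existing predictive model, wrap it as a tool, and let the language model supply the natural-language layer. A sketch where `fetch_telemetry` and `predict_failure` are invented stand-ins for whatever pipeline already exists:

```python
# Sketch: wrap an existing predictive model as a tool; the LLM is the
# conversational layer on top. All names are invented stand-ins.
def fetch_telemetry(component: str) -> dict:
    # Stand-in for the rig's real telemetry store.
    return {"component": component, "vibration_mm_s": 9.8, "temp_c": 84}

def predict_failure(telemetry: dict) -> float:
    # Stand-in for the existing data-science model; returns P(failure).
    return 0.87 if telemetry["vibration_mm_s"] > 8.0 else 0.05

def answer_operator(question: str, component: str, llm) -> str:
    telemetry = fetch_telemetry(component)
    p_fail = predict_failure(telemetry)
    # Same analysis as the traditional approach; the new part is the
    # natural interaction and the planned response.
    return llm(f"Telemetry: {telemetry}. Failure probability: {p_fail:.0%}. "
               f"Operator asks: {question!r}. Recommend alerts and the "
               "logistics steps needed to head off the failure.")
```
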
0:22:31 So I would say those are probably the three.
0:22:39 There are very few unique use cases for LLMs, but the technology is so powerful that we’re
0:22:44 basically re-solving all of the solved problems in order to do it better.
0:22:44 To do it better.
0:22:53 You mentioned earlier that the lack of standardization was a challenge in adoption and development
0:22:58 when it comes to the agents in general, but also how they communicate with one another.
0:23:03 Kind of stepping back from that a little bit, are agents being used kind of more broadly in
0:23:07 cybersecurity and risk management applications?
0:23:07 They are.
0:23:12 So within NVIDIA, we do a lot of work to make sure our software is secure.
0:23:19 And we use AI, we use AI throughout NVIDIA, we use AI to design our GPUs, we use AI to
0:23:22 write our software, we use AI to make sure our software is secure.
0:23:26 And then we will often package those approaches and those learnings as blueprints that we can
0:23:27 re-deliver to our customers.
0:23:35 And one example of that is that we have a computer vulnerability assessment and analysis pipeline,
0:23:39 so that if we have a bit of software and we’re alerted that there’s a vulnerability in
0:23:45 the software, the AI agent will actually look and see if our code paths that trigger that exploit are
0:23:45 executed.
0:23:52 And it will make an assessment of how much risk we’re exposed to, and it will also recommend
0:23:55 how to remediate the problem.
0:24:02 And so that assists our human worker, Christina, who does all of this work, and it should make
0:24:03 her work more efficient.
0:24:09 So it’s a great example of how, you know, a very applied thing that’s potentially error
0:24:16 prone, but important, where we have an expert human who is basically using the agent to give
0:24:17 her more leverage.
0:24:18 Right, right.
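
Sketched as a pipeline, the vulnerability workflow has three stages. Everything below is a skeleton with invented names, not NVIDIA’s actual blueprint; `llm` stands in for any chat-completion call:

```python
# Sketch: three-stage CVE triage skeleton; names invented.
def triage(cve_id: str, code: str, llm) -> dict:
    # 1. Reachability: are the code paths that trigger the exploit executed?
    reachable = llm(f"Code:\n{code}\nAre the paths that trigger {cve_id} "
                    "actually executed? Answer yes or no, with evidence.")
    # 2. Exposure: how much risk does that leave?
    risk = llm(f"{cve_id} reachability: {reachable}. "
               "Rate our exposure: low / medium / high, and why.")
    # 3. Remediation: what should the human analyst do?
    fix = llm(f"Recommend remediation steps for {cve_id}.")
    return {"cve": cve_id, "reachable": reachable, "risk": risk, "fix": fix}

# The human expert reviews this dict instead of reading the code cold --
# the agent supplies leverage, not a replacement.
```
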
0:24:23 So kind of as we start to wrap up here, advice for listeners, I’m sure there are listeners
0:24:29 who are in enterprise situations and thinking about, you know, how do we start, how do we
0:24:35 start designing and developing, deploying down the line, but thinking about agentic AI for
0:24:39 whatever the use case is, you know, for where they’re at, what advice would you give them?
0:24:45 Is there broad advice right now, best practices for designing, developing, deploying agentic AI
0:24:47 in these situations?
0:24:51 So this is the advice I would give to anyone.
0:24:57 And in fact, I try to follow it myself and I give it to my team, is that soon this will
0:24:59 be the way everyone does everything.
0:24:59 Right.
0:25:04 We have to start getting familiar with these tools and these approaches, just broadly.
0:25:05 So are there any that you like?
0:25:12 Do you have, you know, consumer facing in a broad sense, but AI tools that you’re using regularly
0:25:16 at work, not at work that, you know, you’re just, you’re a fan of?
0:25:23 Yeah, there’s, so NVIDIA, of course, we have many partners and I use a lot of their technologies
0:25:24 in my personal life.
0:25:32 In fact, I recently presented to a group of energy investors and energy professionals about
0:25:33 the sustainability of AI.
0:25:39 And I used a lot of the tools, the blueprints that my team built to help me prepare for that
0:25:40 conversation.
0:25:46 So for example, we have something on our blueprints page called PDF to podcast, and you
0:25:51 can give it a bunch of PDF documents and it will generate an engaging monologue or a dialogue.
0:25:53 It could be a conversation or a debate.
0:25:54 Right.
0:25:58 So that you can listen to it in your car and familiarize yourself with the content.
0:25:58 With the material, yeah.
0:25:59 Right.
0:26:03 So this, the entertainment value is not what you get, you know, here clearly.
0:26:04 Right.
0:26:05 But it’s useful.
0:26:06 We’re here all week, folks.
0:26:06 Yeah.
0:26:07 No, it is though, right?
0:26:11 Because you can then, you can, you know, we all have earbuds in our ears all the time anyway.
0:26:14 So why not be learning what you need to learn for your next meeting?
0:26:15 Right.
0:26:19 And so another tool I really like is Perplexity Pro.
0:26:25 Perplexity is, it functions a bit like a search engine, but it’s also agentic in that
0:26:27 it will generate net new content.
0:26:30 It’s not finding content, it’s generating new content.
0:26:33 So you can ask it questions.
0:26:40 For example, when I was researching sustainability, I asked it to develop a report for me on trends
0:26:40 between the TOP500 list of the world’s fastest supercomputers and the Green500 list of the
0:26:45 world’s most energy-efficient supercomputers.
0:26:47 Right.
0:26:50 And are there any crossovers and what are the trends you’re seeing?
0:26:50 Yeah.
0:26:52 And what it prepared didn’t exist anywhere.
0:26:55 It made that report for me.
0:26:55 Net new for you, yeah.
0:26:56 Yep.
0:26:59 And for the developers, you know, I think Cursor is very powerful.
0:27:05 Cursor will kind of give you a development environment where you have an AI assistant that
0:27:09 you can interact with in natural language, and it’s kind of a Visual Studio Code-style editor, and it will
0:27:10 assist you as you work.
0:27:14 So I’d say those are, those are three that I use pretty much every day.
0:27:14 Fantastic.
0:27:22 Jacob, for listeners who want to learn more about agentic AI, agentic AI in the enterprise, the
0:27:25 work your team is specifically doing, where would you send them?
0:27:29 Are there places online, specific part of the NVIDIA site, social media, where should they
0:27:29 go?
0:27:36 I think a great place to start is to go to build.nvidia.com/blueprints.
0:27:43 Most of the agentic AI workflows that we create have interactive demos.
0:27:45 I don’t even know if you can call them demos.
0:27:48 We deploy them in our own cloud so you can experience them firsthand.
0:27:51 And we do not have a consumer focus.
0:27:52 We have a very enterprise focus.
0:27:58 So we have use cases that you won’t see anywhere else, like training fleets of robots, building
0:28:01 wind tunnels entirely in simulation.
0:28:05 So it’s a very cool place to hang out and get started.
0:28:12 We also have a capability where if you’re experimenting with the interactive demo and you actually want
0:28:19 to spin up a deployment of that thing in your own VPC, you can enter your NVIDIA developer
0:28:20 API key.
0:28:26 You can enter your, you know, .pem credentials for your VPC and you can spin up a virtual machine
0:28:27 with the Blueprint pre-deployed in your network.
0:28:28 Very cool.
0:28:29 You can bring your data to it.
0:28:30 You can customize it.
0:28:32 And again, all of these things are open source.
0:28:33 So we also have them available on GitHub.
0:28:39 Jacob Liberman, thank you so much for taking the time out of GTC week to join the podcast.
0:28:43 Agentic AI, obviously a hot topic right now, but that’s underselling it.
0:28:45 Like you said, this is where it’s all heading.
0:28:47 It’s how we’re going to be doing things.
0:28:49 So no better time than the present to get started.
0:28:50 Thank you.

Jacob Liberman, Director of Product Management at NVIDIA, discusses how agentic AI is transforming enterprises by automating complex tasks and enhancing human capabilities. Learn how NVIDIA Blueprints are making it easier for developers to deploy these intelligent agents and drive real business value.
