NVIDIA’s Marco Pavone on AI Simulation, Safety, and the Road to Autonomous Vehicles – Ep. 260

AI transcript
0:00:16 Hello, and welcome to the NVIDIA AI Podcast. I’m your host, Noah Kravitz. If you’ve been
0:00:20 in San Francisco, Phoenix, or a number of other cities recently, no doubt you’ve seen
0:00:25 cars navigating the streets without anyone in the driver’s seat. Robo-taxis and other
0:00:29 autonomous vehicles are becoming a part of our lives, whether through prototype testing
0:00:34 on public roads or driverless cars ferrying passengers across city streets. Autonomous
0:00:39 vehicles hold a lot of promise, but first and foremost, they have to be safe. This is
0:00:44 where advanced simulations powered by AI can help. Here to explain the ins and outs of autonomous
0:00:49 vehicle safety and simulation, and why it’s so important to all of us going forward, is
0:00:54 Marco Pavone, Senior Director of Autonomous Vehicle Research at NVIDIA and Professor at Stanford
0:00:59 University. Marco, welcome, and thank you so much for taking the time to join the NVIDIA
0:01:05 AI Podcast. Thank you for having me. So Marco, if you don’t mind, could we start by having
0:01:10 you tell us a little bit about what your role is at NVIDIA and Stanford as well, if applicable,
0:01:16 when it comes to autonomous vehicle research, safety, simulation, and everything else we’re
0:01:24 going to talk about? Sure. At NVIDIA, I lead autonomous vehicle research, and our research
0:01:33 spans a number of different research tasks. First of all, we work on developing new architectures for
0:01:40 autonomous systems. We work on developing AD foundation models that can empower the full
0:01:46 development lifecycle, the full development program from the onboard stack, all the way to simulation,
0:01:55 data curation, and even safety evaluation. We work on enabling high-fidelity behavior and sensor simulation,
0:02:04 and we also work on developing tools to ensure the safety of AI-based stacks. At NVIDIA, I focus primarily on
0:02:12 ground robotic systems, namely self-driving cars, while at Stanford, I focus primarily on autonomous
0:02:18 aerospace systems. Oh, fascinating. We’ll have to have you back to do an episode on the potential future of
0:02:24 flying cars, putting those two things together. So in the introduction, I mentioned robotaxis. It was
0:02:28 something that came to mind for me as an easy way to kind of visualize what we’re talking about. But as I
0:02:34 understand it, your purview and what NVIDIA is working on more broadly isn’t just fully self-driving
0:02:40 vehicles. There’s more to it than that. Yes. The field of autonomous vehicles is very diverse,
0:02:49 both in terms of the technology from semi-automated systems, all the way to fully automated systems,
0:02:56 like for example, robotaxis, and also in terms of application domains, personal mobility, namely
0:03:05 self-driving cars for people moving in cities, all the way to freight transportation, autonomous vehicles
0:03:12 for agriculture, for construction, and more. Right. Excellent. So let’s start maybe by talking about
0:03:18 HALOS. NVIDIA recently announced, well, I’m going to leave it to you, something called HALOS. Can you tell
0:03:24 us about it and how it relates to AV safety? Yes, of course. So safety obviously is paramount for
0:03:33 autonomous systems. They are safety-critical systems. And HALOS plays a very important role in this regard.
0:03:41 So specifically, HALOS is a full-stack, comprehensive safety system for autonomous vehicles that unifies
0:03:48 safety elements from a vehicle architecture all the way to AI models. And in particular, HALOS comprises
0:03:55 hardware and software elements, tools, models, as well as design principles for combining them to safeguard
0:04:03 AI-based end-to-end AV stacks. The hardware and software, is this in vehicle, in the cloud? Does it span both?
0:04:10 As I mentioned before, it spans everything from the vehicle architecture. So basically, the hardware and
0:04:16 software goes into the car, all the way to the development processes that happen in the cloud
0:04:19 to build a safe AI system. Right.
0:04:28 For example, HALOS comprises the world’s first safety-assessed platform for AI-based AV stacks. So this
0:04:35 goes into the vehicle. And it also comprises dedicated MLOps workflows for safety data curation. So as you can
0:04:42 tell, it’s a very broad program. Very comprehensive. Yeah, great. Okay, so let’s talk about AV safety. This is one of
0:04:48 those questions that I have a million answers to in my head as I ask it, but I’m sure none of them hit the mark. But why
0:04:54 is safety important for autonomous vehicles? Well, autonomous vehicles are safety critical. So the
0:05:03 consequences of a mistake, of course, can be catastrophic, including loss of life. So obviously, you have to make
0:05:11 sure that these systems do not pose an unreasonable risk to society. One, because this is the right thing to do. And
0:05:16 two, because otherwise, they will not be accepted by society. Right. So this is the reason why
0:05:25 NVIDIA, who is developing its own AV program, but is also working on developing an ecosystem where
0:05:32 multiple stakeholders can develop their own AV programs, is so invested in pushing safety from
0:05:37 the vehicle all the way to the cloud. So when we dig into this a little bit, what are some of the key
0:05:44 considerations when it comes to ensuring AV safety? Yeah, so I like to say that there is no silver bullet
0:05:54 to ensure AV safety. There is no single technology or single process that is going to make your system
0:06:03 really safe. Safety really has to characterize the full development program from design time, making sure
0:06:11 that, for example, we train the AI systems on safe driving behaviors, all the way to runtime and
0:06:20 deployment time. So once the system is deployed on the road, building monitors that, for example, keep in check
0:06:33 the AI system, or building guardrails, all the way to the iterative development of the system. So basically, how you learn from your deployments to increase the safety of the system.
0:06:38 And throughout its lifetime, potentially over the years, when the system is deployed.
0:06:43 As I understand it, and mine is very much a layperson’s kind of working understanding, but
0:06:50 one of the things, or perhaps the overarching thing that makes developing autonomous vehicles so challenging,
0:06:56 is that the conditions that they’re dealing with are literally unpredictable. They, you know, things,
0:07:02 weather, driving conditions, the conditions of the roads, a kid on a bike darting out across the
0:07:08 street, all kinds of factors that, you know, you have to account for, even if you can’t actually imagine them ahead of time.
0:07:24 But can you talk about some of the common challenges that developers face at different stages of the life cycle when developing AVs, and how they make sure that safety, drivers, passengers, pedestrians, everybody is prioritized?
0:07:30 I mean, you’re exactly right. So by definition, an autonomous system is a robotic system that
0:07:38 operates in scenarios that were not foreseen at design time and requires some level of reasoning.
0:07:48 So how do you ensure the safety with respect to conditions that potentially might be quite different from those seen at design time when the system was developed?
0:07:55 And as I said before, there is no silver bullet. It goes down to fundamental principles of safety engineering.
0:08:08 So making sure that we build a stack that is diverse, meaning that it comprises components that are different and have overlapping responsibilities.
0:08:27 It goes down to the principle of monitoring, meaning that you have to constantly monitor, if you will, the health of your system to make sure that, for example, it’s not operating in situations whereby you can no longer assure trustworthy operation.
0:08:34 So basically, the system needs to understand when it is outside its domain of expertise.
0:08:39 And of course, it boils down to the principle of testing and validation.
0:08:50 So how do we robustly test the system in a way that covers as much as possible all the most important situations that it might face at deployment time?
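The monitoring principle described here, a system that knows when it is outside its domain of expertise, can be illustrated with a toy sketch. The entropy signal and the threshold below are illustrative assumptions, not NVIDIA’s actual monitor:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a class-probability vector (higher = less certain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def monitor(probs, entropy_threshold=0.7):
    """Toy runtime monitor: flag frames where the perception model's
    uncertainty exceeds a calibrated threshold, signalling that the
    system may be outside its domain of expertise."""
    return "FALLBACK" if predictive_entropy(probs) > entropy_threshold else "NOMINAL"

# A confident detection vs. an ambiguous one (values are illustrative).
print(monitor([0.97, 0.02, 0.01]))  # low entropy -> NOMINAL
print(monitor([0.40, 0.35, 0.25]))  # high entropy -> FALLBACK
```

In a real stack the trigger would feed a guardrail, such as handing control to a fallback planner, rather than just printing a label.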
0:08:59 Right. And are there accepted standards or benchmarks? Or how do you determine when a system is deemed safe?
0:09:04 So the level of safety really depends on the product we’re talking about.
0:09:05 Okay.
0:09:17 So obviously, a semi-automated system whereby the human is in charge of monitoring the system itself has lower safety requirements than a robotaxi.
0:09:20 Right. Because the human can intervene if need be, yeah.
0:09:27 Yeah, exactly. So there is a continuous spectrum of safety requirements depending on the level of automation.
0:09:41 And there are well-understood processes to go from an analysis of the potential risks and consequences to requirements on the stack, for example, in terms of mean times between failures.
0:09:46 Then, how you actually achieve those, again, there is no silver bullet.
0:10:08 It goes back to making sure that you have a safe design, for example, reflecting the principle of diversity, that you have a robust data set curation pipeline to make sure that you train your system on safe behaviors, to make sure that you have a robust runtime monitoring pipeline that keeps the system in check, as well as a very strict test regimen to test your system.
0:10:15 And using a combination of both real-world data, you still need that, even though that is expensive, and also simulation data.
0:10:22 And this is something that is becoming increasingly available to developers, which I think is very exciting.
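A rough sense of why a validation case cannot rely on real-world miles alone comes from a standard back-of-the-envelope calculation. Assuming failures follow a Poisson process (and with an illustrative target rate, not one from the episode), the failure-free exposure needed to demonstrate a rate with statistical confidence is enormous:

```python
import math

def exposure_to_demonstrate(max_failure_rate, confidence=0.95):
    """Exposure (e.g. miles) with ZERO observed failures needed to claim,
    at the given confidence, that the true failure rate is below
    max_failure_rate, assuming failures follow a Poisson process."""
    return -math.log(1.0 - confidence) / max_failure_rate

# Illustrative target: fewer than 1 failure per 10 million miles.
miles = exposure_to_demonstrate(1e-7)
print(f"{miles:,.0f} failure-free miles needed")  # roughly 30 million miles
```

Numbers of this magnitude are one motivation for supplementing real-world testing with trustworthy simulation data.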
0:10:36 So to talk a little more about simulation, I mentioned in the introduction, prototype vehicles testing on public roads, city streets, and all of that, but it sounds like that’s not the only way to test and validate the safety of these systems.
0:10:38 How do simulations work in this context?
0:10:44 I mean, simulation has always been the holy grail of robotics, more broadly.
0:10:50 If we had a perfect simulator, robotics development would be dramatically accelerated.
0:10:54 And of course, a perfect simulator doesn’t exist, at least not yet.
0:11:12 But in the past two, three years, the robotics community broadly has made a number of breakthroughs in the field of simulation, both in terms of how we simulate the environment from a sensor-realism standpoint.
0:11:19 So basically rendering images and environments in a way that is ultra realistic.
0:11:26 It would be very hard for you to tell the difference between a computer-generated image and a real image.
0:11:36 And also in terms of technologies to replicate how other agents, for example, bikers or pedestrians, would behave on the road.
0:11:45 So all these technologies together have basically enabled simulation capabilities that even two, three years ago were not even imaginable.
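One classic way to bootstrap agent behavior in simulation, not necessarily what is used at NVIDIA, is a social-force model: each agent is pulled toward its preferred motion and pushed away from nearby obstacles and agents. A minimal 1-D sketch with illustrative, uncalibrated parameters:

```python
import math

def social_force_step(pos, vel, goal_speed, obstacle_pos,
                      dt=0.1, tau=0.5, strength=2.0, scale=1.0):
    """One 1-D social-force update: the agent relaxes toward its preferred
    speed while an exponential repulsion pushes it away from an obstacle.
    All parameter values here are illustrative, not calibrated."""
    drive = (goal_speed - vel) / tau                      # attraction to preferred speed
    gap = pos - obstacle_pos
    repulse = strength * math.exp(-abs(gap) / scale) * (1.0 if gap >= 0 else -1.0)
    vel = vel + (drive + repulse) * dt
    return pos + vel * dt, vel

# A pedestrian close behind a stopped vehicle is slowed below its
# preferred 1.4 m/s by the repulsion term.
_, v_near = social_force_step(pos=4.9, vel=1.4, goal_speed=1.4, obstacle_pos=5.0)
```

Modern behavior simulation replaces such hand-tuned forces with learned models, but the interaction structure, goal attraction plus mutual reaction, is the same idea Pavone describes.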
0:11:48 It’s been quite a few years in this industry.
0:11:49 Yes.
0:11:57 And now simulation is more than ever part of the development process, for example, for testing.
0:12:08 If you have a new design and you want to make sure that you improve with respect to a number of metrics in a number of different operational design domains, simulation is a great tool for that.
0:12:24 Using simulation for validation is still something that is a bit trickier, because it means that you’re not just interested in directional information, that is, whether something is better than something else, but you’re also very interested in the absolute numbers of your metrics.
0:12:28 And so the question is, how do you trust your absolute numbers?
0:12:50 Well, for example, we’ve been doing research on this topic, on developing statistical methodologies to provide confidence bounds on the metrics that are generated by the simulator, to help develop a validation case that comprises both tests on real data, as I said before, you need that.
0:12:57 I don’t see there is a way around it, but also simulation data, thereby dramatically decreasing the cost of validation.
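The episode does not detail the statistical methodology, but the flavor of putting confidence bounds on a simulator-derived metric can be sketched with a generic percentile bootstrap over synthetic per-run data (the data and metric below are illustrative, not NVIDIA’s method):

```python
import random

def bootstrap_ci(samples, stat=lambda xs: sum(xs) / len(xs),
                 n_resamples=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a metric computed
    over per-scenario samples. Generic illustration only."""
    rng = random.Random(seed)
    stats = sorted(
        stat([rng.choice(samples) for _ in samples])
        for _ in range(n_resamples)
    )
    lo = stats[int(alpha / 2 * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Synthetic per-scenario metric (say, minimum clearance in metres) from 200 sim runs.
rng = random.Random(42)
runs = [max(0.0, rng.gauss(1.5, 0.4)) for _ in range(200)]
low, high = bootstrap_ci(runs)
```

Reporting the interval `(low, high)` rather than a single number is what lets the absolute value of a simulated metric carry weight in a validation case.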
0:13:14 Are there specific, I guess I’m thinking of both points in the development process, but then also situations that a vehicle, I’m thinking about it as a driver, but a vehicle might encounter, that simulation has proved particularly adept at?
0:13:36 So simulation helps both with respect to testing against nominal conditions, like for example, whether you have good performance when negotiating an intersection and so on and so forth, but also with respect to anomalous conditions that might be very hard to see in real life.
0:13:53 And one recent work that I think is very much related to your question was to use large language models to recreate crash scenarios that have happened in real life.
0:14:16 So basically mining the police reports that have been written over the years because of crashes, then using a large language model to interpret them, adding a little bit of generative AI, and then recreating scenarios that would be very hard to recreate just by asking designers and artists to do so.
0:14:23 So we recreate those scenarios, but in a way that is still plausible, because it’s very easy to simulate very challenging scenes.
0:14:28 The question is, are those things that can really happen, are they plausible?
0:14:35 By basically grounding the generation in crash reports, you have a direct grounding on those scenes.
0:14:36 Oh, that’s fascinating.
0:14:39 And we’ve seen how these scenes can actually help improve both testing and validation.
0:14:41 So this was a long answer.
0:14:43 The short answer to your question is yes.
0:14:55 We’ve seen situations where we were able to generate scenarios that would have been very hard to generate otherwise without our AI-based workflows, and they have helped with improving the performance of the system.
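A minimal sketch of the report-to-scenario idea, with the language-model call stubbed out so the example stays self-contained; the schema fields and the prompt are hypothetical, not NVIDIA’s actual workflow:

```python
import json
from dataclasses import dataclass

@dataclass
class CrashScenario:
    """Minimal structured scenario distilled from a crash-report narrative.
    The fields are hypothetical, chosen for illustration."""
    ego_action: str
    other_agent: str
    other_action: str
    conditions: str

PROMPT_TEMPLATE = """Extract a driving scenario from this police report.
Respond with JSON keys: ego_action, other_agent, other_action, conditions.
Report: {report}"""

def scenario_from_llm_json(raw_json: str) -> CrashScenario:
    """Parse the (hypothetical) LLM response into a scenario object that a
    downstream generator could turn into a simulation."""
    return CrashScenario(**json.loads(raw_json))

# In practice `raw` would come from a language model given PROMPT_TEMPLATE;
# here it is hard-coded so the sketch runs offline.
raw = ('{"ego_action": "proceeding through intersection", '
       '"other_agent": "cyclist", '
       '"other_action": "entering crosswalk against signal", '
       '"conditions": "night, wet road"}')
scene = scenario_from_llm_json(raw)
```

The structured object, not free text, is what grounds the generated scene in a real crash, which is the plausibility guarantee Pavone highlights.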
0:14:55 Right.
0:14:56 That’s fascinating.
0:15:00 It’s, you know, one of those things that when you look at it, you’re like, oh, that’s such a simple idea.
0:15:02 It makes so much sense, but very clever.
0:15:03 It’s interesting.
0:15:12 How does simulation, AV simulation for autonomous vehicles, how does that differ from traditional automotive simulation or even other methods of simulation?
0:15:13 Yes.
0:15:25 So AV simulation has some features that make it particularly challenging and some other features that actually make it simpler than other domains, like, for example, robotics.
0:15:42 One feature that makes AV simulation particularly challenging, in the context of high levels of automation, so, say, autopilots or robotaxis, is that you have to reason about the interactions with other agents on the road.
0:16:05 And you have to be able to do that in a way that is highly dynamic, so it basically reflects the physics of an interaction, but also the behavioral dynamics.
0:16:15 So how, for example, a biker might respond to what an autonomous car is doing, sort of a ballet between the two agents.
0:16:27 So that part is very hard, but it’s very important if you really want to push the utility of a simulation, particularly for higher levels of automation.
0:16:42 For more basic levels of automation, like, for example, collision avoidance or lane keep assist functions, you don’t have such a requirement on modeling interactions, for example, which makes it simpler.
0:16:45 There are other parts, though, that make AV simulation easier.
0:17:04 Like, for example, the dynamics of a vehicle are relatively simple and well understood, at least as compared to the dynamics of a humanoid robotic system that is, for example, grasping an object, where you have to reason about friction.
0:17:07 Maybe the object is soft and you have to reason about deformation.
0:17:08 Right, right.
0:17:13 Those are very hard problems, and they are not something that you have to deal with in AV simulation.
0:17:20 So I think there are features that make it more difficult and others that make it a little bit simpler, but definitely it’s very hard.
0:17:21 It’s very hard.
0:17:22 Yeah.
0:17:29 How do developers navigate testing using simulations to test in controlled environments versus uncontrolled environments?
0:17:44 And, you know, you were talking about, I almost was going to ask you if you have to kind of weave some unpredictability into these other agents, you know, imagining, well, is it a competent, somebody who’s used to being on a bicycle or is it a kid just learning for the first time and will react differently?
0:17:54 But obviously, controlled environments, uncontrolled environments are both important parts of the processes, but how do they relate and how does the use of simulation kind of differ in the two?
0:18:15 Yeah. So, as I mentioned before, simulation helps both in terms of simulating, you know, nominal behaviors, which is still important, for example, to test whether a new autonomy stack is potentially regressing on some metrics, like, for example, how well and how comfortably you’re driving on a highway.
0:18:34 At the same time, you’re also interested in rarer cases, corner cases, and here is where you leverage different technologies. For example, for sensor simulation, there have been significant advances in the past two, three years on the topic of neural reconstruction.
0:18:46 So, basically, reconstructing a scene potentially from a different point of view to allow for new trajectories that a vehicle can execute in a simulated environment.
0:18:57 To back up just a step, Marco, I apologize, but when you say sensor simulation, does that refer to simulating based on inputs gathered by sensors, or is it something else?
0:19:06 Oh, I mean, basically, recreating a scene as it would be perceived by a camera or a lidar and so on.
0:19:07 Okay, thank you.
0:19:17 So, basically generating, for example, images that are synthetic, but that are very close to the ones that your cameras would see at that point in time.
0:19:17 Okay.
0:19:31 So, what I was saying is that there are technologies that allow you to reconstruct very faithfully a scene, but, of course, a reconstruction technology reconstructs a scene that was observed in the real world.
0:19:37 So, basically, it’s bound to reconstruct whatever has been seen.
0:19:37 Right.
0:20:00 So, that’s where other new technologies related, for example, to video generation, like the Cosmos model that NVIDIA has produced, come to the rescue, or, you know, help, because they provide the capability of generating new scenes, something that is completely different from what has been seen in the real world.
0:20:10 Maybe in a conditional fashion, so these models can be conditioned, for example, on text prompts, to create the scenarios that are deemed of particular interest.
0:20:17 Like, for example, I want to simulate a kid that is chasing a ball on a crosswalk.
0:20:19 And now we have the technology to do that.
0:20:19 Right.
0:20:33 And I would imagine the models and these systems have guardrails that, to speak to what you mentioned earlier, kind of make sure that the scene being simulated is something that could actually happen in the physical world and not something outlandish.
0:20:34 Or at least plausible.
0:20:35 Or at least plausible.
0:20:35 Right.
0:20:36 Plausible is a better word.
0:20:45 That’s one of the reasons why there is substantial effort at NVIDIA and in the community in general to make sure that the simulation is so-called controllable.
0:20:46 Right.
0:20:48 Or, in other words, you can have control over what it generates.
0:20:49 Right.
0:21:06 We’ve talked on the podcast recently about digital twins and the role of digital twins in industry and industrial AI and being able to simulate, well, all kinds of things happening in a factory environment, say, before actually trying to construct or deploy them physically.
0:21:33 Well, in a way, an AV simulator is a digital twin of a city, an interesting digital twin, because, as I said before, differently from other domains, for example, the domain of industrial robotics or the aerospace domain, in the urban digital twin, you also have to model humans.
0:21:33 Right.
0:21:54 But in general, yes, digital twins are revolutionizing a number of fields, including aerospace, which is my other passion, by providing unprecedented tools for designing aircraft systems completely or almost completely in simulation, along with the control laws to control them.
0:21:58 And actually, I do have some projects along those lines that are also quite exciting.
0:22:02 Not to take us off course, but I don’t know if you can speak to this briefly.
0:22:03 It might be an unfair question.
0:22:16 But how different is it simulating, running simulations, working on AVs, cars, vehicles on the ground, as opposed to working on, I don’t know if you call them AVs, but in the aerospace industry?
0:22:21 Well, in aerospace, depending on what you want to do.
0:22:34 So depending on whether you want to test new design, or maybe you want to test new control laws, or maybe you want to test operations, of course, you have different types of simulators.
0:22:49 But in general, one important aspect of the simulator is that of accurately modeling the aerodynamics of the system, which, particularly in the case of aircraft design, has to be extremely accurate.
0:23:06 And today, thanks to advances in AI, we can run simulations with very detailed modeling of the airflow around the aircraft in a fraction of the time that was possible even a couple of years ago.
0:23:22 And interestingly, I do have a project whereby we try to develop a sort of mini-digital twin that can even be run online and used for the purposes of control to control highly flexible aircraft.
0:23:28 So these are basically unmanned aircraft that tend to be much more lightweight than regular aircraft.
0:23:34 And so they have deformable modes, meaning that the aircraft can actually, the wings can actually deform a little bit.
0:23:34 Right.
0:23:42 And so you want to have a digital twin that allows you to predict how the airframe might deform as a consequence of your control actions.
0:23:44 Well, now we have the technology to do that.
0:23:44 Yeah, amazing.
0:23:46 I’m speaking with Marco Pavone.
0:23:52 Marco is the Senior Director of Autonomous Vehicle Research at NVIDIA, and also a professor at Stanford University.
0:24:02 And in addition to, well, as we’ve been talking about, in addition to his work on autonomous vehicles, cars and the like, also working in the aerospace realm at the same time.
0:24:13 And Marco, you alluded to this, well, throughout the conversation, but just now again, the past few years have just been breakthrough after breakthrough in AI, in generative AI in particular.
0:24:19 But how are these breakthroughs in generative and also physical AI influencing AV simulation now?
0:24:20 In multiple ways.
0:24:36 So breakthroughs in terms of reconstructing environments, for example, visually, in a way that is very difficult to distinguish from the real world, at least to a human eye.
0:24:49 Breakthroughs in terms of generating completely new scenes and scenarios by harnessing, for example, the power of video generation models.
0:25:04 Breakthroughs in terms of using large language models or other foundation models to be able to parse human knowledge in terms of potential crash scenarios to recreate those in simulation.
0:25:05 Right, right.
0:25:11 And then also breakthroughs in terms of modeling the behaviors of humans in simulation.
0:25:13 So it’s not just one single breakthrough.
0:25:20 It’s the convergence of multiple breakthroughs that, interestingly, have all happened in the past really few years.
0:25:35 It’s interesting listening to you talking and thinking like, wow, you see so much of the world when you take a drive in a car somewhere, you know, and all of these different elements you’re talking about needing to simulate to really reconstruct a situation and test and validate for safety.
0:25:41 You know, you see the natural world, the changing road conditions, the kids and the animals and all of that.
0:25:50 So it’s fascinating to hear you talk about the ways those come together by virtue of these AI breakthroughs that allow us to model these different things.
0:26:11 Yeah, and it’s interesting what you’re saying, because one of the things that these breakthroughs allow you to do is to tap into highly heterogeneous data, meaning that traditionally people were using data collected by a fleet of vehicles owned by a company developing the system.
0:26:25 But now with these new foundation models, like video generation models, vision language models, large language models, and so on, we can actually tap into internet-scale knowledge.
0:26:31 For example, we can use videos that have been recorded through dash cams, maybe by taxis.
0:26:50 And all of a sudden, instead of relying only on the data gathered by 1,000 vehicles that are part of your fleet, you can collect, you know, an enormous amount of driving hours, incredible not only in terms of the magnitude, but also in terms of the diversity, because, of course, humans across the world are driving.
0:26:53 So you’re able to collect data across the world simultaneously.
0:27:01 So this is yet another opportunity that has been unlocked by the most recent AI breakthroughs.
0:27:09 So along those lines, what are some of the other potential opportunities you see that world foundation models are opening up for AV simulation?
0:27:17 So, so far, we have discussed the use of world foundation models mostly for the purposes of simulation.
0:27:22 And that’s absolutely a very important use case.
0:27:23 But then you can think a bit more.
0:27:50 So if you have a model that can really predict what will happen in the future, or at least in the next few seconds, with a high level of fidelity, both in terms of how the environment will look and in terms of, you know, what the other agents will do, it means that your model has implicitly learned a very powerful model of how the world works.
0:27:59 And so you can use that understanding as the basis for policy construction, for basically building the AI that goes into the vehicle.
0:28:13 So I think that another opportunity that is right now still under development is to distill down this extremely powerful knowledge in a way that can be used also as part of the onboard stack.
0:28:16 NVIDIA is working heavily on it.
0:28:19 And of course, other people in the community are also working heavily on it.
0:28:25 This is, I think, going to lead potentially to another substantial breakthrough.
0:28:26 Right. Amazing.
0:28:41 So you’ve touched on this a little bit in the conversation here and there, but are there any research projects that you’ve worked on recently, or are still working on now, related to safety and simulation that might be helpful for the audience to hear about?
0:28:43 So, yeah, multiple projects.
0:29:08 So one of them I’m particularly excited about, and that I alluded to at the beginning, is how do we best leverage simulated data and real-world data to provide metrics with rigorous confidence bounds, so that we can really trust them and use them for validation purposes.
0:29:22 Then we are doing research on how to curate data to ensure that the data is, you know, safe so that you only learn on safe behaviors.
0:29:25 And you might think that defining safety is easy.
0:29:27 You don’t want to collide.
0:29:30 But actually, safety is much more nuanced than that.
0:29:34 So think about how you may want to teach a kid how to drive.
0:29:43 You start saying, well, you shouldn’t have put yourself in that situation because if the other driver were to be inattentive, then you might have collided.
0:29:52 So how do we capture all those potentially dangerous situations and distill them down into a data set that we can use for data curation and testing?
0:29:55 So this is another area that is very interesting.
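One common surrogate for “potentially dangerous even though nothing collided” is time-to-collision; using it as a curation filter is an assumption here, not something stated in the episode. A minimal sketch:

```python
def time_to_collision(gap_m, ego_speed_mps, lead_speed_mps):
    """Time-to-collision with a lead vehicle under constant speeds.
    Returns None when the gap is not closing."""
    closing = ego_speed_mps - lead_speed_mps
    return gap_m / closing if closing > 0 else None

def is_risky(gap_m, ego_speed_mps, lead_speed_mps, ttc_threshold_s=3.0):
    """Flag a frame as a near-miss worth curating, even if no crash occurred."""
    ttc = time_to_collision(gap_m, ego_speed_mps, lead_speed_mps)
    return ttc is not None and ttc < ttc_threshold_s

print(is_risky(gap_m=20.0, ego_speed_mps=15.0, lead_speed_mps=5.0))   # TTC = 2 s  -> True
print(is_risky(gap_m=60.0, ego_speed_mps=15.0, lead_speed_mps=14.0))  # TTC = 60 s -> False
```

Filters like this capture Pavone’s point that a drive can be unsafe, you should not have put yourself in that situation, even when no collision actually happened.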
0:30:01 And the third one is how do we automate safety evaluation?
0:30:06 As I said, the safety assessment, understanding what the reasonable risk is, is complicated.
0:30:10 Of course, we can rely on human judgment, but that doesn’t scale.
0:30:16 So how can we actually train separate AI models that can help with the safety evaluation?
0:30:21 So that’s another interesting project we have currently ongoing that we hope will bring real value soon.
0:30:35 And so along those lines, looking ahead in the next couple of years, five years, maybe it’s more like 10 years, where do you see autonomous vehicles and specifically talking about safety and simulation?
0:30:37 But you can go more broad than that.
0:30:43 Where do you see the technology, the industry, society’s relationship with AVs headed?
0:30:56 So I think that, in terms of semi-automated vehicles and largely automated vehicles, with high confidence, this technology will be pervasive in our society within the next five years.
0:31:06 We already have companies that are productizing these systems, multiple companies that are doing so, and it’s delivering real value.
0:31:08 So I’m quite confident in this statement.
0:31:20 In the case of robotaxis, the estimate gets a little bit more difficult, in the sense that we are in a situation whereby we do have a proof of the technology.
0:31:28 I mean, we do have robotaxis, for example, from Waymo that are driving around in San Francisco without a safety driver very safely.
0:31:32 And so that is a marvel, if you will, of engineering.
0:31:41 But the question is, when will this technology be able to scale up to the level that also becomes a profitable technology?
0:31:45 So I would say it is more of a business question at this point than a technology question.
0:31:49 Of course, to become profitable, you also need to have technology improvements.
0:31:51 So that is a bit harder to predict.
0:32:02 But given the progress in the past few years, definitely, and especially given the progress by Waymo and others on deploying robotaxis,
0:32:13 it stands to reason that within the next five to ten years, many cities in the United States, in China, and elsewhere will have robotaxi deployments.
0:32:20 To what extent personal mobility will be taken over by robotaxis is still very hard to predict.
0:32:20 Sure.
0:32:24 It’s not just technological, it’s also a business consideration.
0:32:24 Right.
0:32:26 And of course, there are many forces at play.
0:32:33 Just a personal question, and perhaps I should know this, but I’ve been behind next to some of those Waymo vehicles in San Francisco.
0:32:39 The apparatus on top that’s spinning constantly, is that a camera or what is that?
0:32:48 So Waymo, like basically all other highly automated vehicles, has a range of sensors.
0:32:50 So cameras are one of them.
0:32:54 But then another important sensor is what’s referred to as lidar.
0:33:02 So lidar sensors are laser-based sensors that measure distance as a function of time of flight.
0:33:09 And so they allow you to very accurately characterize the distance from objects in a scene,
0:33:17 and also provide a redundant way of perceiving the environment, which, as I said before, is one of the key principles of safety.
0:33:20 Redundancy and diversity are among the key principles of safety.
0:33:27 So if, for example, a camera has a problem, you have a lidar that can compensate for the pitfalls of the camera.
0:33:33 Right. So yes, a Waymo system has a bunch of lidar sensors and camera sensors and so on.
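The time-of-flight relation behind lidar ranging is simple to state: the pulse travels out to the object and back, so the distance is the speed of light times the round-trip time, divided by two. A tiny sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_distance_m(round_trip_time_s):
    """Distance from a time-of-flight lidar return: the pulse travels
    out and back, so divide the round trip by two."""
    return C * round_trip_time_s / 2.0

# A return after about 66.7 nanoseconds corresponds to an object ~10 m away.
d = lidar_distance_m(66.7e-9)
```

The tiny round-trip times involved are why lidar electronics need sub-nanosecond timing to resolve distances at centimetre scale.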
0:33:34 Got it.
0:33:38 Marco, this has been fascinating and so many different elements to think about.
0:33:42 And for listeners who want to dig in further, would like to know more,
0:33:50 where can they go online on the NVIDIA website somewhere else to find out more about AV safety,
0:33:53 maybe about Halos, any of the other things you’ve been talking about?
0:34:00 Yeah. So at the last NVIDIA GTC conference, we had an AV safety day.
0:34:10 So it was a program, a three-hour program, where we went in great detail on all the different safety elements that comprise Halos.
0:34:18 So the recording of the presentations, along with the slides, are available on the NVIDIA GTC webpage.
0:34:18 Yep.
0:34:26 And then there is an NVIDIA Halos webpage that people can refer to in order to learn more about Halos.
0:34:32 So just type NVIDIA Halos on your favorite browser, and then this page will pop up.
0:34:33 Easy enough.
0:34:37 Marco Pavone, again, thank you so much for taking the time to speak with us.
0:34:44 Really just a, I keep saying this, but a fascinating look into all of the different things that go into developing these systems.
0:34:49 And as you put it, all of the nuances that go into defining what safety is.
0:34:51 All right. Thank you so much for having me.
0:34:51 Thank you.

In this episode of the NVIDIA AI Podcast, Dr. Marco Pavone, Senior Director of Autonomous Vehicle Research at NVIDIA and Professor at Stanford University, joins us to discuss the cutting-edge technologies making autonomous vehicles safer than ever. Learn how digital twins and high-fidelity simulation are improving vehicle testing, accelerating development, and reducing real-world risks. Dr. Pavone also shares insights on the latest advances in generative AI and foundation models, and their impact on autonomous vehicle innovation—from city streets to aerospace.

Learn more at: ai-podcast.nvidia.com
