AI transcript
0:00:00 [MUSIC]
0:00:10 >> Hello, and welcome to the NVIDIA AI podcast.
0:00:13 I’m your host, Noah Kravitz.
0:00:15 We’re coming to you from GTC 2024 in San Jose, California,
0:00:20 and we’re here to talk about nerfs.
0:00:22 No, not foam footballs and dart guns,
0:00:24 but neural radiance fields.
0:00:27 What is this kind of nerf?
0:00:28 It’s a technology that might just be changing the nature of images forever.
0:00:32 Here to explain more is Michael Rubloff.
0:00:35 Michael is the founder and managing editor of radiancefields.com,
0:00:39 a news site covering the progression of radiance field based technologies,
0:00:42 including neural radiance fields, aka nerfs,
0:00:45 and something called 3D Gaussian Splatting that I’ll leave to Michael to explain.
0:00:50 Michael, thanks so much for taking time out of GTC to join the AI podcast.
0:00:54 >> Of course, thank you so much for having me.
0:00:55 So first things first, goofy football jokes aside, what is a nerf?
0:01:00 What does that mean?
0:01:01 >> Yeah, so essentially, you can think of nerfs as allowing you to take a series
0:01:06 of 2D images or video, and what you can do from that is actually
0:01:11 create a hyper realistic 3D model.
0:01:13 And what that allows for is once you have it created, it’s like a photograph,
0:01:17 but it’s perfect from any imaginable angle.
0:01:20 Your composition is no longer a bottleneck.
0:01:22 You can do whatever it is that you would like with that file and it will look lifelike.
0:01:27 >> So if I were to take, I don’t know how many, two, three,
0:01:32 five pictures of the two of us sitting here right now in this podcast room,
0:01:37 if you will, I could put those together into a nerf.
0:01:42 And then I would have a file that I can look at from different perspectives.
0:01:48 I can sort of move through.
0:01:50 How does that work from the user experience side?
0:01:53 >> Yeah, so typically, the recommended amount is somewhere between like,
0:01:57 I’d say, 40 to 100 images.
0:01:59 It’s really easy from a video because then you can just slice up
0:02:03 individual frames from that video.
0:02:05 But there are some methods actually that are going all the way down to three
0:02:09 images and it’s actually able to reconstruct, which is just mind blowing.
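Since slicing frames out of a video is called out here as the easiest way to gather input images, a minimal sketch of that step follows, assuming OpenCV is available; the file paths and sampling interval are illustrative choices, not details from the episode.

```python
# A minimal sketch of pulling evenly spaced frames from a capture video so they
# can feed a radiance field pipeline. Paths and the sampling interval are
# illustrative assumptions, not values mentioned in the episode.
import os
import cv2  # pip install opencv-python

def extract_frames(video_path: str, out_dir: str, every_n_frames: int = 15) -> int:
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = index = 0
    while True:
        ok, frame = cap.read()           # ok becomes False once the video ends
        if not ok:
            break
        if index % every_n_frames == 0:  # e.g. ~2 frames/second for a 30 fps clip
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:04d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# A one-minute phone clip sampled this way lands comfortably in the 40-100 image range.
# extract_frames("capture.mp4", "frames")
```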
0:02:13 >> I’m too used to few-shot and zero-shot learning and things like that.
0:02:18 So I’m like, one image?
0:02:19 >> Yes, they are getting there.
0:02:20 And there’s been a ton of amazing work, one called ReconFusion from Google,
0:02:25 which is just shocking.
0:02:26 It can go down as far as three images and it’s still very compelling.
0:02:31 But yeah, so once you actually have gone ahead and taken your images or video,
0:02:36 you would run it through a Radiance Field Pipeline,
0:02:38 whether that’s a nerf-based one or a Gaussian splatting-based one.
0:02:41 There are several cloud-based options where it’s just drag and drop your images
0:02:45 and it does all the work for you.
0:02:47 And the resulting image or resulting file, yeah, you have autonomy over it.
0:02:52 And you can kind of experience that whatever you’ve captured from whatever
0:02:57 angle that you would like or whatever your use case might be for it.
0:03:01 >> How are they created without getting too technical about it?
0:03:07 Can you kind of give an overview of kind of what’s going on behind the scenes to
0:03:10 put these together?
0:03:11 >> Sure, so once you have your initial images,
0:03:14 the first step for both nerfs and Gaussian splatting is running it through
0:03:18 something called structure from motion,
0:03:19 where essentially you’re taking all the images and kind of aligning them in a
0:03:23 space with one another.
0:03:24 So that’s kind of taking a look, saying like if image X is over here and
0:03:28 image Y is over here, here’s how they overlap and converge with one another.
0:03:31 And so that’s kind of the baseline approach for both methods.
0:03:34 And from there, they each have their own training methods where nerfs have
0:03:39 a neural network involved in the training of them,
0:03:41 whereas Gaussian splatting uses rasterization.
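To make the structure-from-motion step a little more concrete, here is a hedged two-view sketch using OpenCV: it matches keypoints between two of the captured images and estimates how the second camera is rotated and translated relative to the first. Full pipelines such as COLMAP solve this jointly across all the images with bundle adjustment; the intrinsics matrix K and the variable names below are illustrative assumptions rather than anything specified in the conversation.

```python
# Toy two-view version of the "align the images in space" step: recover the
# relative pose of camera Y with respect to camera X from matched features.
import cv2
import numpy as np

def relative_pose(img_x_gray: np.ndarray, img_y_gray: np.ndarray, K: np.ndarray):
    # 1. Detect and match local features between the two views.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_x_gray, None)
    kp2, des2 = orb.detectAndCompute(img_y_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 2. Estimate the essential matrix (RANSAC rejects bad matches) and
    #    decompose it into a rotation R and translation direction t.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # how image Y overlaps and converges with image X in 3D

# K is the camera intrinsic matrix, e.g. estimated from calibration or EXIF focal length.
```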
0:03:44 >> Okay, and I guess that kind of begs the question of beyond what you just said or
0:03:48 maybe that’s it.
0:03:48 But what’s the difference between a nerf and Gaussian splatting?
0:03:52 >> Yeah, so nerfs essentially were created first.
0:03:55 They were created as a joint effort between UC Berkeley and Google.
0:04:00 >> Okay.
0:04:00 >> So nerfs have an implicit representation and
0:04:06 they’re trained through a neural network.
0:04:07 So there’s a lot of work being done to get higher and
0:04:11 higher frame rates associated with that.
0:04:13 Whereas Gaussian splatting just uses direct rasterization.
0:04:16 And so you’re able to have a much more efficient rendering pipeline where
0:04:21 you can really easily get 100 plus FPS and
0:04:25 you can use them with a lot of different methods as well.
0:04:27 So they’re very compatible with Three.js and React Three Fiber, where you can see them
0:04:33 being used in website design now and being on platforms like Spline.
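As a rough illustration of the distinction drawn here, below is a toy NumPy sketch of how a NeRF turns its learned field into a pixel: sample points along a camera ray, query the field for color and density at each sample (in a real NeRF that query hits the trained neural network; here a stand-in function plays that role), and alpha-composite the results. Gaussian splatting skips this per-ray sampling and instead rasterizes and blends sorted 3D Gaussians, which is where the 100-plus FPS figure comes from. Everything below is an illustrative approximation, not code from any particular implementation.

```python
# Toy volume rendering along one ray, the core of how a NeRF produces a pixel.
# field_fn stands in for the trained MLP: it maps 3D points plus a viewing
# direction to RGB colors and densities (sigma).
import numpy as np

def render_ray(origin, direction, field_fn, near=0.1, far=4.0, n_samples=64):
    ts = np.linspace(near, far, n_samples)            # depths along the ray
    points = origin + ts[:, None] * direction         # (n_samples, 3) sample points
    rgb, sigma = field_fn(points, direction)          # colors (n, 3), densities (n,)

    deltas = np.append(np.diff(ts), 1e10)             # spacing between samples
    alpha = 1.0 - np.exp(-sigma * deltas)             # opacity of each segment
    # Transmittance: how much light survives to reach each sample.
    trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1] + 1e-10))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)       # composited pixel color

# Stand-in "field": a fuzzy red sphere of radius 0.5 at the origin.
def toy_field(points, _direction):
    dist = np.linalg.norm(points, axis=-1)
    sigma = np.where(dist < 0.5, 20.0, 0.0)
    rgb = np.tile([1.0, 0.2, 0.2], (len(points), 1))
    return rgb, sigma

pixel = render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]), toy_field)
```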
0:04:36 >> Very cool, and so that kind of gets into the next question,
0:04:39 which is how are they being used now?
0:04:40 I saw some examples on, I don’t know if it was the NVIDIA developer blog.
0:04:45 And they used a kind of an obscure song that my wife and I like a lot.
0:04:50 So it was perfect, right?
0:04:51 But it was a nerf, I believe, of a couple walking down memory lane.
0:04:56 They were walking outside with foliage around them.
0:05:00 And you were able to, I was able to sort of zoom around from different points of
0:05:03 view, see their front, see their back, look at the trees, that kind of thing.
0:05:07 But beyond sort of a demo scene like that,
0:05:10 how are nerfs being used out in the world?
0:05:11 >> Yeah, so that specific one actually is of my parents that I took.
0:05:14 >> Oh, it’s your parents, that’s very cool.
0:05:16 >> And so I took it, because that’s one of the major use cases for me.
0:05:18 I want to be able to document my life not only in two dimensions, but
0:05:22 I want to have a hyper realistic three dimensional moment in time frozen.
0:05:26 And so for me, that’s one of the personal use cases.
0:05:29 But on a more commercial basis, where you’re seeing a lot of the early adoption
0:05:34 is in the media and entertainment world as well as the gaming world too.
0:05:39 So for instance, Shutterstock has been putting together a library
0:05:43 of Radiance Fields, where essentially, here’s what you’re able to do.
0:05:47 So say, hypothetically, you want to film at Grand Central Terminal.
0:05:52 But you cannot afford to shut down all the traffic and all the trains and
0:05:55 all the foot traffic through that to film.
0:05:58 What you can do, using a Radiance Field,
0:06:01 is capture it once, and you’re able to then bring that file into Unreal Engine and
0:06:06 into a virtual production environment.
0:06:08 And then you can film infinitely.
0:06:10 And there’s no more rush outside of the actual rental rate.
0:06:13 And you can go and get the shot that you actually need.
0:06:16 And that’s where it’s starting to get adopted pretty early on.
0:06:20 And similarly, on the gaming side of things, through generative Radiance
0:06:24 Fields, you are able to create these from text and images,
0:06:28 and newly from video as well.
0:06:31 Now you’re able to drop in these assets. I think the fastest method
0:06:35 currently takes about half a second to create a full 3D model,
0:06:39 and you can put that straight into one of the game engines.
0:06:43 >> Right, and so let me sort of play this back and
0:06:46 see if I’m grasping it correctly.
0:06:48 So if I were to go to Grand Central and do my very short,
0:06:53 relatively speedy, very quick shoot,
0:06:56 And come away with enough images to create a nerf.
0:07:00 I would then be able in a virtual production environment to kind of
0:07:05 create scenes or put elements into scenes from all of these different points of
0:07:09 view, not just from a single perspective.
0:07:11 Is that kind of the big idea?
0:07:13 >> Yes, yeah, that’s correct.
0:07:14 Where essentially you’re able to sync up the nerf to the actual camera.
0:07:17 And then you can use the full virtual production pipeline to go ahead and create.
0:07:23 >> Wow, this is something I probably should have asked you at the top.
0:07:25 So listeners, forgive me for not having a more scientific inquiry kind of way of
0:07:31 organizing my own thoughts.
0:07:32 But a radiance field, what does that term mean?
0:07:36 >> That’s a great question.
0:07:43 So essentially, you could think about a radiance field as,
0:07:46 well, just breaking it down into its two simple words.
0:07:46 The radiance is just what that individual color would look like based
0:07:52 upon your viewing direction.
0:07:53 So say you’re looking at, like, a glass or something, and
0:07:57 you can see that there’s an actual reflection there,
0:08:00 depending on how you look at it. What radiance fields offer is
0:08:03 something called view-dependent effects.
0:08:05 So as you move your head around, just as you do in real life, light changes,
0:08:09 light shifts and reflects.
0:08:10 And just like that, radiance fields are able to model that effect.
0:08:14 So you could think of radiance fields as the actual shift in colors at a given
0:08:17 point in space.
0:08:18 So the radiance at a specific point is the radiance,
0:08:22 and it’s contained inside of a field.
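For readers who prefer notation, the way the research papers usually write this down (not something stated in the episode itself) is as a function that maps a 3D position and a viewing direction to a color and a density, which is then composited along each camera ray:

```latex
% A radiance field maps position x and viewing direction d to color c and density sigma.
F_\Theta : (\mathbf{x}, \mathbf{d}) \mapsto (\mathbf{c}, \sigma)

% A pixel is rendered by compositing samples t_1 < \dots < t_N along its ray r(t) = o + t d:
\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \bigl(1 - e^{-\sigma_i \delta_i}\bigr)\,\mathbf{c}_i,
\qquad
T_i = \exp\!\Bigl(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Bigr),
\qquad
\delta_i = t_{i+1} - t_i
```

The view direction d is what carries the view-dependent effects: move your head and d changes, so the predicted color c can change with it, which is how reflections and highlights shift the way Michael describes.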
0:08:25 >> Got it, okay.
0:08:26 So you said something a minute ago about generative AI,
0:08:30 using generative AI to create radiance fields.
0:08:33 And you may have used a term that slipped my mind, forgive me.
0:08:36 Is it the same basic principle as doing a text to image,
0:08:42 using a text-to-image model, ChatGPT or DALL-E, Stable Diffusion, whatever it is?
0:08:47 Is it that same principle that I enter a text prompt and
0:08:50 then the system can create a radiance field?
0:08:53 Or is it creating a series of images?
0:08:55 Or can you get more, kind of more control than that over it?
0:08:59 >> Yeah, so exactly.
0:09:01 What it does is it will create a series of images of a singular object.
0:09:06 And actually, in some cases, they’re starting to release some papers where
0:09:09 they’re creating multiple objects.
0:09:11 But each object itself is either a nerf or, say, a Gaussian splatting file.
0:09:16 And from that, that’s what’s actually being used to train the actual resulting
0:09:20 3D model, but they’re able to do that in a fraction of a second.
0:09:23 >> Sure.
0:09:25 So earlier this week, you hosted a session at GTC.
0:09:29 And I, unfortunately, wasn’t able to make it.
0:09:31 I wasn’t in town just yet.
0:09:33 We were talking about it before we hit record.
0:09:35 And you spoke to, if I’ve got this right, some of the artistic implications,
0:09:39 possibilities around using nerfs.
0:09:42 And then also some more kind of business enterprise oriented applications.
0:09:47 How did that go?
0:09:48 What kinds of things did you talk about?
0:09:49 And then I’m kind of curious what the audience reaction was.
0:09:52 Either that or, kind of more generally, as people learn about
0:09:58 nerfs and Gaussian splatting, what the reaction is?
0:10:02 And does it spark imagination?
0:10:04 And sort of what are some of the implications?
0:10:06 >> Yeah, I was surprised by how many people actually came out to attend.
0:10:11 And so it seemed like there was an extreme amount of interest in terms of
0:10:15 just, like, the visualization itself.
0:10:17 So I had a roughly 20-minute video of just looping different examples of
0:10:23 Radiance Fields that I’ve created and some of the people in the community have as well.
0:10:27 And so I think there was a lot of interest across a wide variety of industries,
0:10:31 where I spoke to professors, I spoke to people working on offshore drilling sites.
0:10:36 I spoke to physicians, people of really diverse backgrounds and use cases.
0:10:41 But I think all of them will be affected by Radiance Fields.
0:10:44 >> And is the interest in some of those, because I want to ask you about sort of
0:10:48 artistic creative implications as well, but we’ll put a pin in that for a second.
0:10:52 Is the interest from, say, a physician or the offshore drilling site makes me think
0:10:57 of use cases of robots and drones to be able to go to places more safely
0:11:04 than sending a human to inspect something?
0:11:06 And is it along those lines of being able to create a Radiance Field and
0:11:12 then from a, quote, safe environment be able to inspect different aspects
0:11:17 of the offshore site from different angles?
0:11:20 Is it that kind of thing, or is it something totally different?
0:11:22 >> Yeah, no, it actually is quite similar where if you have a predetermined camera
0:11:27 path or you give the necessary information to the model,
0:11:32 it will be able to create a hyper realistic view of what it sees.
0:11:36 And from that, you can then flag for a human if they need to go and
0:11:41 take a visit for actual maintenance or repairs.
0:11:44 And so you’re able to really give a hyper realistic look for
0:11:48 that specific use case for asynchronous maintenance.
0:11:51 >> Right, right.
0:11:53 Are there implications with VR and extended reality and augmented reality?
0:11:57 >> Yes, yes, and so that was actually one of the demos that we were showing.
0:12:00 It’s just, like, VR applications, because they still retain their
0:12:06 view-dependent effects when you’re in VR.
0:12:08 And so as you move around a scene, we as humans expect light to behave in a certain
0:12:13 way, and with this, that continues to hold true.
0:12:17 And with radiance fields, you can actually walk through the entire scene.
0:12:22 And so it is, I think, the closest thing to actually stepping back into
0:12:26 a moment in time that we have.
0:12:27 >> Yeah, amazing.
0:12:29 I’m speaking with Michael Rubloff.
0:12:30 Michael is the founder and managing editor of RadianceFields.com,
0:12:34 a website that’s covering the progression of Radiance Fields based technologies.
0:12:38 And we’ve been talking about neural radiance fields, NeRFs, and
0:12:42 3D Gaussian Splatting, these techniques that allow us to stitch together 2D images
0:12:46 and create a hyper realistic 3D model, 3D environment that we can do all these
0:12:52 different things with.
0:12:53 I mentioned, wanted to ask you about some of the artistic implications.
0:12:56 And this might be a weird leaping off point, so redirect me if it is.
0:12:59 But recently, we recorded a podcast with a woman from a company called Cuebric.
0:13:06 And I’m going to get this wrong, forgive me.
0:13:07 But it’s basically kind of like a digital sound stage, sort of an advanced
0:13:13 digital green screen type thing that you can use in filmmaking,
0:13:17 video making, and powered by generative AI, kind of similar things.
0:13:22 And I remember asking her, what advice do you have for burgeoning filmmakers who
0:13:29 are interested in creating but wondering how to go about it in this age with all
0:13:34 these AI tools now becoming available and advancing so quickly?
0:13:38 And her answer was not what I expected, but it was really interesting.
0:13:40 She said, well, the first thing you should do when you’re thinking about using
0:13:44 generative AI in filmmaking is really delve into your own subconscious.
0:13:48 And if I had understood her correctly, I think she was talking about how the
0:13:52 types of images and moving images you can create with
0:13:59 generative AI tools at your disposal go well beyond what you can create without
0:14:05 them, and you’re not limited to capturing reality, so to speak.
0:14:09 You can create reality, which people have been able to do with technology for
0:14:14 a while now, but easier, faster, perhaps better.
0:14:18 What do radiance fields do?
0:14:20 What do you think that they are doing and could do for creative applications?
0:14:24 Yeah, I see a very large creative opportunity for radiance fields going
0:14:29 forward.
0:14:30 I think that they allow people to take larger risks, or be able to actually…
0:14:36 I actually wouldn’t even call them risks, because what you can do with them is, if
0:14:41 you get the capture, you can film in post.
0:14:44 You don’t actually need to film up front, you just need to capture it.
0:14:48 And I think that that allows a lot more thinking about where you’re not
0:14:53 being constrained for time to think like, what are the actual camera movements
0:14:57 that we want or what’s the best way to actually tell this story?
0:15:00 And you have the ability to align and say, if you are a director, you can go to
0:15:04 your director of photography and say, here’s the exact camera movement I’m trying
0:15:07 to convey in this or here’s the exact thing I’m trying to show.
0:15:11 And I think that that’s going to really supercharge productions and in the same
0:15:16 way, I think that it’s also going to allow more stories to be told in places
0:15:22 that we’ve never been before because we can be transported to these places
0:15:28 and be exposed to the natural lifelike interpretation of wherever you want to
0:15:35 take an audience.
0:15:36 And in the same way, I think for students and for independent filmmakers, it
0:15:41 represents such a massive opportunity because you will have the ability to go
0:15:45 to locations and tell stories in locations that you may have always dreamed
0:15:49 of, like shutting down the Las Vegas Strip, for instance.
0:15:51 But now you’re actually going to be able to do that.
0:15:53 Right, right.
0:15:54 Where are we at in terms of the technology when it comes to the resolution
0:15:59 of images and particularly in the backgrounds of the images?
0:16:03 Are we just constrained by the quality of your camera and the available
0:16:09 compute or is it more intricate than that?
0:16:13 No, I think that where we are right now, it’s fascinating.
0:16:15 There’s a Meta Reality Labs paper that was released late last year called VR-NeRF,
0:16:20 and essentially what they did was they created this camera rig,
0:16:25 known as the Eiffel Tower, which has 22, I think, Sony A9 cameras
0:16:31 strapped onto it, all facing different directions.
0:16:33 And then from that, what they would do is they’d go into a room and then they’d
0:16:36 push it through the room and each camera would take nine bracketed exposure
0:16:41 shots across different exposures.
0:16:44 Then each set of those images would be compiled into a single HDR image,
0:16:47 and then those images would be used to train the NeRF.
0:16:50 And the resulting image quality of that approaches IMAX-level quality.
0:16:55 And it’s able to be reconstructed.
0:16:57 It’s not a bottleneck in terms of the visual fidelity.
0:17:00 It’s able to handle that.
0:17:02 It’s more of a compute issue right now.
0:17:04 But obviously, as time goes on, we’re going to get more and more efficient
0:17:08 computers as well.
0:17:10 And so it’s more a proof of concept to me than anything, that, you know,
0:17:15 when we get to that level, that’s the floor.
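The bracketed-exposure step described above, several differently exposed shots merged into one HDR frame before training, can be sketched with OpenCV's exposure-merging tools; the file names and exposure times below are placeholders rather than details from the VR-NeRF paper.

```python
# Sketch of merging a bracket of exposures into a single well-exposed image,
# the kind of preprocessing described for the capture rig. File names and
# exposure times are illustrative placeholders.
import cv2
import numpy as np

paths = [f"bracket_{i}.jpg" for i in range(9)]            # nine shots of one view
exposure_times = np.array([1/1000, 1/500, 1/250, 1/125, 1/60,
                           1/30, 1/15, 1/8, 1/4], dtype=np.float32)
images = [cv2.imread(p) for p in paths]

# Option A: radiometric HDR merge (needs exposure times), then tone-map to display range.
hdr = cv2.createMergeDebevec().process(images, times=exposure_times)
ldr = cv2.createTonemap(gamma=2.2).process(hdr)

# Option B: exposure fusion, which blends the bracket directly without exposure times.
fused = cv2.createMergeMertens().process(images)

cv2.imwrite("merged.png", np.clip(fused * 255, 0, 255).astype(np.uint8))
```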
0:17:17 Yeah, right, right, right.
0:17:19 Beyond capturing moments in time for your own use,
0:17:24 are you doing other things yourself with the technology right now?
0:17:27 Yeah, so I’ve been doing some consulting work for businesses
0:17:30 that want to implement radiance field-based technology into their offerings.
0:17:34 And I can’t talk too much about the work.
0:17:38 But one of the ones I’m really excited to be working on is Shutterstock,
0:17:43 where it’s just, you know, how do we create these assets
0:17:46 that are available for people to actually use today?
0:17:49 Right, amazing.
0:17:50 So kind of out in, you know, in the mainstream world, so to speak, right,
0:17:53 in the pop culture world, I would imagine that we’ve probably seen NeRFs in action
0:17:58 and just didn’t realize it or didn’t know how to name it.
0:18:02 Is that off base?
0:18:03 Or are there some examples of things out there that listeners might have seen?
0:18:06 Yeah, there have actually been some pretty high-profile uses of radiance fields,
0:18:10 where earlier this year, the Phoenix Suns, actually, the entire team was NeRFed.
0:18:15 And so there are NeRFs of Kevin Durant, Devin Booker, the entire team.
0:18:20 And they actually are using it as part of their introductory video for this season.
0:18:26 But, like, during the games and starting lineups?
0:18:28 Yes, yes, and it really showcases a way that, you know,
0:18:31 you as a business can help bring fans closer to the action,
0:18:35 because, you know, you have these lifelike interpretations
0:18:37 that are doing the most insane camera movements.
0:18:40 And so is it like KD’s going up for a shot
0:18:43 and the camera sort of seems to, and I don’t know if the motion stops or not,
0:18:48 but the camera sort of stops and then tracks sort of around him from a different angle,
0:18:53 like that kind of thing?
0:18:54 Yeah, that’s actually extremely close to one of the examples where he’s about to, you know,
0:18:58 dunk and it kind of flies up around him and circles around,
0:19:01 which would be very difficult to go ahead and create.
0:19:05 But, you know, radiance fields make that actually surprisingly easy.
0:19:10 Yeah, amazing.
0:19:11 Yeah, and there’s also been a lot of other high profile use cases within the music industry.
0:19:15 So Zayn Malik, for instance, has a music video for Love Like This,
0:19:20 which I think got something like 10 million views in the first 24 hours,
0:19:24 which is just insane.
0:19:25 There’s a music video for RL Grime’s Pour Your Heart Out,
0:19:29 which is actually comprised of over 700 individual nerfs,
0:19:33 which is just an insane endeavor.
0:19:35 And every single shot in that music video is a nerf.
0:19:39 There’s also Usher for his most recent single, I think it’s called Ruin,
0:19:43 has a few different examples of Gaussian Splatting in it.
0:19:47 And then Polo G has a song that just released called Sorrys and Ferraris
0:19:51 that also uses Gaussian Splatting.
0:19:54 And then Chris Brown has one as well.
0:19:58 And J. Cole and Drake put out a music video, I think,
0:20:02 within the last week or two.
0:20:03 And there is a nerf hidden in there.
0:20:05 Nice.
0:20:05 And I don’t know if this counts or not, but in Jensen’s most recent keynote,
0:20:10 there actually is a nerf in the background of one of his slides.
0:20:14 All right, should we make it a contest for listeners to spot it,
0:20:17 or do you want to call it out so we can go look?
0:20:20 If you guys want to, or if you guys want to pause it
0:20:23 and see if you guys can spot it from the two hours.
0:20:27 This is great.
0:20:28 Everybody, you can go queue up your YouTube playlist
0:20:30 with all the music videos you just mentioned.
0:20:32 Yes.
0:20:33 And then, you know, rewind the podcast, listen back,
0:20:36 you’re spotting the nerfs.
0:20:37 And then when you’re done with the pod, go back, rewatch Jensen’s keynote
0:20:40 and see if you can find it.
0:20:41 Yes, but it’s in this slide where I think he’s taking a look
0:20:44 at the different modalities in which NVIDIA operates.
0:20:46 And it’s under the 3D tab and it’s like a coastal cliff view.
0:20:51 And it was actually taken by one of my good friends, Jonathan Stevens.
0:20:54 And so it was really cool.
0:20:55 That was the first time, I think,
0:20:57 that we’ve seen nerfs being featured in the keynote.
0:20:59 Right, excellent.
0:21:00 Shout out to Jonathan.
0:21:02 Michael, closing thoughts, nerfs, radiance fields, 3D Gaussian splatting.
0:21:08 For the listener who, you know, never heard of this stuff before,
0:21:12 listened to our conversation, and, you know, had it ring some bells,
0:21:15 spark some ideas in their head.
0:21:17 They’re thinking about going out and, you know, exploring some
0:21:20 of the music videos we talked about.
0:21:22 Where do you think this is going to go over the next,
0:21:25 whatever the time period is, couple of years, 10 years, generations?
0:21:28 Are all of our photos going to become 3D multi-perspective,
0:21:33 you know, sort of models going forward?
0:21:38 Or are we forever living in a world of 2D images,
0:21:42 and then figuring out ways to make them more like 3D models?
0:21:46 Where’s the future of imaging and sort of post-processing headed?
0:21:50 Yeah, it’s a great question.
0:21:51 And, you know, my opinion is that we now have the ability to no longer be constrained
0:21:56 to 2D, and, you know, 2D is not actually how we experience our lives.
0:22:01 And it’s, to me, I feel like, you know, it should not be the final frontier of imaging.
0:22:06 And now that we have the technology to do so, I think it’s really time to begin
0:22:12 exploring how we can actually document, you know, our lives in a life-like way.
0:22:17 It’s the way that we actually experience life.
0:22:18 Because not only can you create, you know, static nerfs or Gaussian splatting files,
0:22:23 but you can also create dynamic versions of them, too.
0:22:27 And so if you can imagine, like, you know, an analogy of, you know, static nerfs to photos,
0:22:32 you can also do the same with videos.
0:22:34 And so I think that, you know, we’re really entering into an age where imaging is not
0:22:41 the same as it has been since the inception of photography.
0:22:46 You know, obviously, it progressed significantly, but I think that now the technology is there
0:22:50 where we can just take a fundamental leap forwards into an entirely new dimension.
0:22:54 Come to GTC and you leave in a new dimension, that’s how it works.
0:22:59 Michael Rubloff, thank you so much for stopping by the podcast.
0:23:02 Your website, again, is called radiancefields.com.
0:23:06 Are there other places for people who want to follow your work, learn more about the space,
0:23:10 you’d direct them to go? Other websites, social media accounts, anything?
0:23:13 Yeah.
0:23:14 All my social media handles are just @radiancefields.
0:23:17 And so, generally, LinkedIn and Twitter are the two big ones that I mainly post on.
0:23:24 But yeah, I would just encourage all listeners to try downloading some of the platforms
0:23:30 themselves. Some of the really good ones to get started:
0:23:33 you can take a look at Luma AI, Polycam.
0:23:36 If you’re on Windows, you can download Postshot or Nerfstudio.
0:23:39 And you know, they’re all free right now.
0:23:42 And it’s not as bad as you would imagine to actually capture everything.
0:23:45 It’s actually quite straightforward and is pretty forgiving.
0:23:48 So yeah, give it a try.
0:23:49 If you can take a picture, you can make a nerf.
0:23:52 Exactly.
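For listeners who want to try the Nerfstudio route mentioned above, the typical workflow at the time of writing looks roughly like the sketch below; the command names and flags come from the Nerfstudio documentation and can change between releases, so treat this as a hedged outline rather than a guaranteed recipe.

```python
# Rough sketch of the Nerfstudio workflow: extract frames and camera poses,
# then train a NeRF variant. Commands reflect the Nerfstudio docs at the time
# of writing and may differ in newer releases.
import subprocess

# 1. Turn a phone video into posed images (runs COLMAP under the hood).
subprocess.run(
    ["ns-process-data", "video", "--data", "capture.mp4", "--output-dir", "processed"],
    check=True,
)

# 2. Train the default "nerfacto" model on the processed capture; a local web
#    viewer link is printed during training so you can fly through the scene.
subprocess.run(["ns-train", "nerfacto", "--data", "processed"], check=True)
```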
0:23:53 Excellent.
0:23:54 Thank you again.
0:23:55 A pleasure talking to you.
0:23:56 Thank you so much.
0:23:56 Thank you.
Let’s talk about NeRFs — no, not the neon-colored foam dart blasters, but neural radiance fields, a technology that might just change the nature of images forever. In this episode of NVIDIA’s AI Podcast recorded live at GTC, host Noah Kravitz speaks with Michael Rubloff, founder and managing editor of radiancefields.com, about radiance field-based technologies. NeRFs allow users to take a series of 2D images or video to create a hyperrealistic 3D model — something like a photograph of a scene, but that can be looked at from multiple angles. Tune in to learn more about the technology’s creative and commercial applications and how it might transform the way people capture and experience the world.