AI transcript
0:00:06 That space, the 3D space, the space out there, the space in your mind’s eye,
0:00:12 the spatial intelligence that enables people to do so many things beyond language
0:00:14 is a critical part of intelligence.
0:00:17 Fei-Fei leans over to me, she’s like, you know what we’re missing?
0:00:19 I said, what are we missing? She said, we’re missing a world model.
0:00:21 I’m like, yes!
0:00:24 We can actually create infinite universes.
0:00:29 Some are for robots, some are for creativity, some are for socialization,
0:00:32 some are for travel, some are for storytelling.
0:00:36 It suddenly will enable us to live in a multiverse way.
0:00:38 The imagination is boundless.
0:00:43 When we talk about AI today, the conversation is dominated by language,
0:00:45 LLMs, tokens, prompts.
0:00:48 But what if we’re missing something more fundamental?
0:00:49 Not words, but space.
0:00:52 The physical world we move through and shape.
0:00:54 My guests today think we are.
0:00:58 Fei Fei Li, a pioneer in modern AI, helped usher in the deep learning era
0:01:00 by putting data at the center of machine learning.
0:01:04 Now she’s co-founder and CEO of World Labs, building world models,
0:01:08 AI systems that perceive and act in 3D space.
0:01:11 She’s joined by A16Z general partner, Martin Casado,
0:01:14 computer scientist, repeat founder,
0:01:17 and one of the first people Fei Fei called when forming the company.
0:01:22 Today, they explain why spatial intelligence is core to general intelligence
0:01:24 and why it’s time to go beyond language.
0:01:26 Let’s get into it.
0:01:31 As a reminder, the content here is for informational purposes only.
0:01:34 It should not be taken as legal, business, tax, or investment advice,
0:01:37 or be used to evaluate any investment or security,
0:01:41 and is not directed at any investors or potential investors in any A16Z fund.
0:01:45 Please note that A16Z and its affiliates may also maintain investments
0:01:46 in the companies discussed in this podcast.
0:01:49 For more details, including a link to our investments,
0:01:54 please see a16z.com/disclosures.
0:02:00 Fei-Fei, thank you so much for joining us here today.
0:02:03 Martin, why don’t you briefly brag on behalf of Fei-Fei a little bit?
0:02:05 How would you summarize her contributions to AI for people unfamiliar?
0:02:08 Yeah, someone that doesn’t need a lot of introduction,
0:02:10 and she’s done so many things that I can’t fill in,
0:02:12 so maybe I’ll just do the ones that are appropriate to this.
0:02:13 Of course, she was on the Twitter board.
0:02:14 She was a Google architect.
0:02:16 Founder and CEO of World Labs.
0:02:18 But very, very importantly, like, we all know AI,
0:02:20 and we all talk about kind of neural networks,
0:02:23 and there’s a number of people that focused on making those effective.
0:02:26 But Fei-Fei really singularly brought data into the equation,
0:02:29 which now we’re recognizing is actually probably the bigger problem,
0:02:29 the more interesting one.
0:02:33 And so she truly is the godmother of AI, as everybody calls her.
0:02:35 And Fei-Fei, why did you have to have Martin
0:02:36 as the first investor?
0:02:39 Well, first of all, I’ve known Martin for more than a decade.
0:02:40 A long time.
0:02:45 You know, I joined Stanford in 2009 as a young assistant professor,
0:02:47 and Martin was finishing his PhD there.
0:02:51 So I always knew him, and of course, Martin’s advisor,
0:02:52 Nick McKeown, was a good friend,
0:02:55 and I always knew Martin went on to become
0:02:58 a very successful entrepreneur and very successful investor.
0:03:01 So we see each other, we talk about things.
0:03:04 But as I was formulating the idea of World Labs,
0:03:08 I was looking for what I would call my unicorn investor.
0:03:12 I don’t know if that’s a word, but that’s how I think about this.
0:03:18 Who is not only, obviously, a very established and successful investor
0:03:23 who can be with entrepreneurs on this journey through the ups and downs,
0:03:27 who can be very insightful, who can bring the kind of knowledge, advice, resource.
0:03:32 But I was also particularly looking for an intellectual partner.
0:03:36 Because what we are doing at World Labs is very deep tech.
0:03:40 We are trying to do something no one else has done.
0:03:45 We know with a lot of conviction it will change the world, literally.
0:03:49 But I need someone who is a computer scientist,
0:03:55 who is a student of AI, who understands product, market, customers, go-to-market,
0:03:59 and just can be on the phone or in person with me
0:04:01 every moment of the day as an intellectual partner.
0:04:03 And here we are.
0:04:06 We talk almost every single day.
0:04:07 It is true.
0:04:07 It is true.
0:04:08 Amazing.
0:04:12 The origin story of us first connecting is actually pretty interesting.
0:04:15 So Fei-Fei has clearly been thinking about this idea for a very long time,
0:04:16 well before the company started,
0:04:17 maybe for years even.
0:04:20 And she has this very deep intuition of what AI needs
0:04:22 in order to basically navigate the world, right?
0:04:24 But we were at one of Mark’s fancy lunches
0:04:26 and there’s a bunch of AI people
0:04:29 and everybody was so excited about LLMs, right?
0:04:30 And everyone was talking about language.
0:04:32 And I’d come to this independent conclusion
0:04:34 just because I’ve actually done a lot of image investing
0:04:36 that like that wasn’t the end of the story.
0:04:38 And so Fei-Fei and I are at the end of this table,
0:04:39 all these people talking about it.
0:04:40 And Fei-Fei leans over to me.
0:04:41 She’s like, you know what we’re missing?
0:04:42 I said, what are we missing?
0:04:44 She said, we’re missing a world model.
0:04:45 And I’m like, yes!
0:04:47 And it fell into place then
0:04:49 because I’d been like thinking about stuff at a high level.
0:04:51 But as she does, she just kind of perfectly articulated this.
0:04:53 And she had a year’s worth of thinking about this
0:04:54 and talked to people, et cetera.
0:04:57 And so in some way, we kind of, in our own crooked paths,
0:04:59 had arrived at a very similar intuition.
0:05:01 Hers was like way more filled out.
0:05:03 Mine was just this kind of fuzzy thing.
0:05:05 But then after that, we actually had a number of conversations
0:05:08 where we both agreed that we were aligned on this kind of idea.
0:05:11 Actually, I don’t know if you know this.
0:05:13 So of course, during that lunch,
0:05:15 we hit it off on this world model idea.
0:05:18 But I was at that point already talking to various people,
0:05:21 not just computer scientists, technologists,
0:05:24 but also investors, potentially business partners.
0:05:27 And to be honest, most people didn’t get it.
0:05:29 You know, when I say world model, they nod.
0:05:32 But I can just tell that was just a polite nod.
0:05:34 So I called Martin.
0:05:37 I’m like, do you mind coming over to Stanford campus
0:05:38 and have coffee with me?
0:05:39 Coffee cafe.
0:05:40 Yeah, coffee cafe.
0:05:43 And then as soon as Martin came and sat down, I said,
0:05:47 Martin, can you define a world model for me?
0:05:51 I really wanted to hear if Martin actually meant it.
0:05:55 And the way he defined it, as an AI model
0:05:59 that truly understands the 3D structure, shape,
0:06:03 and the compositionality of the world was exactly what I was talking about.
0:06:07 And I was like, wow, he’s the only person so far I’ve talked to
0:06:09 who actually meant it.
0:06:10 It’s not just nodding.
0:06:10 Wow.
0:06:13 Okay, so we’re going to get to World Labs and the specifics of this.
0:06:15 But first, I want to take you back both to your PhD days,
0:06:20 your professor days, and reflect on if you could go back in time
0:06:23 and sort of have knowledge of what’s happened the preceding 10 years in AI,
0:06:25 what do you think would have been the biggest surprises
0:06:27 or what’s the thing that you didn’t see coming
0:06:29 that would have shocked your younger self?
0:06:31 Or did you have a good sense of how this was going to play out?
0:06:35 Yeah, it’s ironic to say because, as Martin said,
0:06:38 I was the person who brought data into the AI world,
0:06:43 but I still continue to be so surprised.
0:06:48 Not surprised intellectually, but surprised emotionally
0:06:53 that the data-hungry models, the data-driven AI
0:06:59 can come this far and genuinely have incredible emergent behaviors
0:07:01 of a thinking machine, right?
0:07:02 Yeah, let’s get into the specifics.
0:07:04 Why start another foundation model company?
0:07:06 Why aren’t LLMs enough?
0:07:10 My intellectual journey is not about company or papers.
0:07:13 It’s about finding the North Star problem.
0:07:16 So it’s not like I woke up and say, I have to do a company.
0:07:20 I woke up every day, day after day for the past few years,
0:07:24 thinking that there is so much more than language.
0:07:31 Language is an incredibly powerful encoding of thoughts and information.
0:07:36 But it’s actually not a powerful encoding of the 3D physical world
0:07:39 that all animals and living things live in.
0:07:45 And if you look at human intelligence, so much is beyond the realm of language.
0:07:49 Language is a lossy way to capture the world.
0:07:53 And also, one subtlety is that language is purely generative.
0:07:57 Language doesn’t exist in nature.
0:08:00 We look around; there’s not a syllable or a word.
0:08:06 Whereas the entire physical, perceptual, visual world is there.
0:08:13 And animals’ entire evolutionary history is built upon so much perceptual
0:08:15 and eventually embodied intelligence.
0:08:22 Humans not only survive, live, and work, but we build civilization beyond language,
0:08:25 constructing the world and changing the world.
0:08:28 So that’s the problem I want to tackle.
0:08:33 And in order to tackle that problem, obviously, research was important.
0:08:38 I spent years doing that as an academic, and it’s still fun.
0:08:44 But I do realize, and especially talking to Martin, that the time has come
0:08:53 that a concentrated, industry-grade, focused effort in terms of compute, data, and talent
0:08:56 is really the answer to bringing this to life.
0:08:59 And that’s why I wanted to start World Labs.
0:08:59 Amazing.
0:09:04 Erik, you can do a very simple thought experiment that kind of highlights the difference
0:09:05 between language and space.
0:09:10 So if I put you in a room and I blindfolded you, and I just described the room,
0:09:14 and then I asked you to do a task, the chances of you being able to do it are very little.
0:09:16 I’m like, oh, ten feet in front of you is a cup.
0:09:22 I’m like, you know, like this is just, it’s this very inaccurate way to convey reality,
0:09:24 because reality is so complex and it’s so exact, right?
0:09:30 On the other hand, if I took off the blindfold, and you can see the actual space, right?
0:09:33 And what your brain is doing is actually reconstructing the 3D, right?
0:09:36 Then you can actually go and manipulate things and touch things, right?
0:09:39 And so one way to think about it is we do a lot of language processing,
0:09:41 and we use that to communicate, and the high-level ideas, et cetera.
0:09:46 But when it comes to navigating the actual world, we really, really rely on the world itself
0:09:48 and our ability to reconstruct that.
0:09:51 And how and when did you realize that language models weren’t enough?
0:09:53 Because it seems like it’s not super widely known.
0:09:54 I don’t hear about this all the time.
0:10:00 Well, so if you ask me, like, what is this surprising breakthrough?
0:10:04 It’s that language went first, because we’ve worked so hard on robotics, right?
0:10:06 I mean, I feel like even look at autonomous vehicles.
0:10:10 As an industry, we’ve invested, like, $100 billion in it.
0:10:13 I remember when Sebastian Thrun, like, actually won, like, the DARPA Grand Challenge.
0:10:14 2005.
0:10:15 2005.
0:10:17 And we’re like, hooray, AV is done, right?
0:10:22 And then 20 years later, like, we’re finally there, $100 billion in, et cetera.
0:10:23 This is like a 2D problem.
0:10:28 And so that was the path we were going on, is do you actually solve, like, world navigation?
0:10:29 And it’s hard.
0:10:31 And then out of nowhere comes these LLMs.
0:10:34 And they are unit economic positive.
0:10:38 They solve all of these language problems, like, basically immediately.
0:10:40 And so it just took me a moment.
0:10:48 Actually, Fei-Fei phrased it beautifully early on when we were talking, which is the part of our brain that actually deals with language is actually pretty recent.
0:10:50 And so we’re actually pretty inefficient at it, right?
0:10:53 And so the fact that a computer does it better is not super surprising.
0:10:59 But the part of the brain that actually does the navigation, you know, the spatial part, has been around for millions of years.
0:11:01 Maybe the reptilian brain has been around 400 million years.
0:11:02 Well, it’s even more than that.
0:11:03 It’s the trilobite brain.
0:11:04 Yeah, yeah, right.
0:11:05 If trilobites had a brain.
0:11:05 Right.
0:11:07 500 million years.
0:11:07 Yeah.
0:11:09 So it’s almost like we’re unrolling evolution, right?
0:11:16 So the language part is actually very, very important for, like, high-level concepts and, like, the laptop class type work, which is what it’s impacting right now.
0:11:19 But when it comes to space, and this is everything from robotics.
0:11:22 So anything where you’re trying to construct something physical, you have to solve this problem.
0:11:25 And then we know from AV that it’s a very tough problem.
0:11:28 And then maybe this is what is worth talking about.
0:11:31 Like, the generative wave gave us some insight on how you might want to do it.
0:11:32 So it really felt like that was the time.
0:11:37 My journey is very different because I’ve always been in vision, right?
0:11:41 So I feel like I didn’t need LLMs to convince me
0:11:43 that LWMs, large world models, are important.
0:11:45 I do want to say we’re not here bashing language.
0:11:46 I’m just so excited.
0:11:58 In fact, seeing ChatGPT and LLMs and these foundation models having such breakthrough success inspires us to realize the moment is closer for world models.
0:12:15 But Martin said it so beautifully: that space, the 3D space, the space out there, the space in your mind’s eye, the spatial intelligence that enables people to do so many things beyond language, is a critical part of intelligence.
0:12:24 It goes from ancient animals all the way to humanity’s most innovative findings, such as the structure of DNA, right?
0:12:26 That double helix in 3D space.
0:12:32 There’s no way you can use language alone to reason that out.
0:12:34 So that’s just one example.
0:12:37 Another one of my favorite scientific examples is the buckyball,
0:12:42 a carbon molecule structure that is so beautifully constructed.
0:12:48 That kind of example shows how incredibly profound space and 3D world is.
0:12:50 Let’s paint even more of a picture.
0:12:58 When World Labs has achieved its vision, or large world models have achieved their vision, what are some applications or use cases that we can present to the audience to help make it concrete?
0:13:00 Yeah, there is a lot, right?
0:13:02 For example, creativity is very visual.
0:13:09 We have creators from design to movies to architecture to industrial design.
0:13:12 Creativity is not just only for entertainment.
0:13:16 It could be for productivity, for machinery, for many things.
0:13:24 That alone is a highly visual, perceptual, spatial area of work.
0:13:26 Of course, we mentioned robotics.
0:13:29 Robotics to me is any embodied machines.
0:13:32 It’s not just humanoids or cars.
0:13:34 There’s so much in between.
0:13:47 But all of them have to somehow figure out the 3D space they live in, have to be trained to understand that 3D space, and have to do things in it, sometimes even collaboratively with humans.
0:13:50 And that needs spatial intelligence.
0:14:05 And, of course, I think one thing that’s very exciting for me is that for the entirety of human civilization, we all collectively, as people, lived in one 3D world.
0:14:09 And that is the physical Earth’s 3D world.
0:14:13 A few of us went to the moon, but, you know, it’s a very small number.
0:14:15 But that’s one world.
0:14:19 But that’s what makes the digital virtual world incredible.
0:14:25 With this technology, which we should talk about, it’s the combination of generation and reconstruction.
0:14:29 Suddenly, we can actually create infinite universes.
0:14:32 Some are for robots.
0:14:33 Some are for creativity.
0:14:35 Some are for socialization.
0:14:36 Some are for travel.
0:14:38 Some are for storytelling.
0:14:42 It suddenly will enable us to live in a multiverse way.
0:14:44 The imagination is boundless.
0:14:50 I think it’s very important because these conversations can sound abstract, but they’re actually not.
0:14:55 But the reason they sound abstract is because it’s truly horizontal, just like LLMs are, right?
0:14:56 So, like, if you guys say, like, what are LLMs good at?
0:14:59 The same LLM we use for, like, an emotional conversation.
0:15:01 We use it to write code.
0:15:02 We use it to do lists.
0:15:04 We use it for self-actualization, right?
0:15:09 And so, I think we can get actually pretty concrete about what these models do, right?
0:15:12 And so, let me just give it a shot, and then Fei-Fei is the expert, of course.
0:15:16 So, with these models, you can take a view of the world, like a 2D view of the world,
0:15:22 and then you can actually create a 3D full representation, including what you’re not seeing,
0:15:25 like the back of the table, for example, within the computer.
0:15:27 So, given just a 2D view, you have the full thing.
0:15:30 And then you ask, okay, well, what can you do with that thing, for example?
0:15:32 Well, you can manipulate it.
0:15:32 You can move it.
0:15:33 You can measure it.
0:15:33 You can stack.
0:15:36 So, anything that you would do in a space, you could do, right?
0:15:37 I mean, you could do architecture.
0:15:38 You could do design.
0:15:42 But it turns out the ability to fill out the back of the table means that you can fill out
0:15:43 stuff that was never there to begin with, right?
0:15:45 So, let’s say that I just had a 2D picture of this.
0:15:48 I could create a 360 of everything, right?
0:15:50 And so, now it’s fully generative.
0:15:51 And so, what does that mean?
0:15:52 That means that’s video games.
0:15:53 That’s creativity.
0:16:00 And so, it’s a super horizontal piece that takes, basically, a computer with a single view
0:16:04 of the world, or maybe multiple views of the world, and creates a full 3D representation that
0:16:05 that computer then can act on.
0:16:10 And so, you can see that that’s a very concrete, pivotal thing from everything from, like, robotics
0:16:12 to video games to art and design.
0:16:13 Yeah.
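[Editor's note: the loop Martin describes, a single 2D view becoming a full 3D representation the computer can measure and manipulate, can be sketched in miniature. This is a toy illustration, not World Labs' method: the pinhole intrinsics and per-pixel depths below are made-up numbers, and in a real world model inferring that depth is exactly the hard part.]

```python
import numpy as np

# Hypothetical pinhole camera intrinsics (focal lengths and principal
# point, in pixels) -- illustrative values, not from any real camera.
fx = fy = 500.0
cx, cy = 320.0, 240.0

def unproject(u, v, z):
    """Lift pixel (u, v) with known depth z into camera-space XYZ."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Two pixels on a hypothetical tabletop edge, both 2 m from the camera.
p1 = unproject(300, 250, 2.0)
p2 = unproject(420, 250, 2.0)

# With 3D coordinates recovered, "measure it" is ordinary vector math --
# something the flat pixel coordinates alone could not give us.
width = float(np.linalg.norm(p2 - p1))
print(round(width, 2))  # 0.48 (metres between the two points)
```

Once every pixel carries a Z coordinate, manipulating, stacking, and measuring all reduce to geometry on these points, which is why recovering the full 3D structure, including unseen surfaces like the back of the table, is the pivotal step.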
0:16:17 It seems like we haven’t fully been appreciating sort of the 3D components until now.
0:16:17 Is that fair to say?
0:16:19 It is fair to say.
0:16:22 In fact, I think it took evolution a long time.
0:16:25 3D is not an easy problem.
0:16:31 But I always come back to the fact that I had a conversation with my six-year-old years ago
0:16:34 about why trees don’t have eyes.
0:16:37 And the fundamental thing is trees don’t move.
0:16:38 They don’t need eyes.
0:16:48 So, the fact that the entire basis of animal life is moving and doing things and interacting
0:16:51 gives life to perception and spatial intelligence.
0:16:52 Yeah.
0:16:59 And in turn, spatial intelligence is going to reinvent horizontally, as Martin said,
0:17:04 so many of the ways humans work and live.
0:17:04 Yeah.
0:17:04 Fascinating.
0:17:08 But it is definitely worth asking the question, why can’t you just use 2D video for this, right?
0:17:11 Like, 3D is very, very fundamental to this.
0:17:13 Fei-Fei, you suggested let’s get deeper into the technology.
0:17:17 What can we share more about how it works or what the breakthrough is or what’s worth commenting
0:17:17 on the technology?
0:17:21 To Martin’s point, does it need to be 3D, or why can’t you just use 2D?
0:17:25 I think you can do a lot of things using 2D.
0:17:30 The fact is that 2D will get you very far.
0:17:37 In fact, today’s multimodal LLMs are already making a big difference in the robot learning
0:17:42 world, helping guide you to know what’s next, the state of the world.
0:17:48 But fundamentally, physics happens in 3D and interaction happens in 3D.
0:17:52 Navigating behind the back of the table needs to happen in 3D.
0:17:58 Composing the world, whether physically, digitally, needs to happen in 3D.
0:18:01 So fundamentally, the problem is a 3D problem.
0:18:07 One way to think about it is if it’s a human being looking at, say, a 2D video, the human
0:18:09 being can reconstruct the 3D in their head, right?
0:18:13 But let’s say I’ve got a robot that has the output of the model.
0:18:17 If that’s 2D and then you ask the robot to do, I don’t know, distance or to grab something,
0:18:19 that information is missing.
0:18:20 You’ve got the XYZ plane.
0:18:22 The Z plane just isn’t there at all, right?
0:18:27 And so for many things that are spatial, you need to provide that information to the
0:18:30 computer so that you can actually navigate in 3D space.
0:18:34 And so 2D video is great if it’s a human because we already can turn it into 3D.
0:18:37 But like for any computer program, it’ll need to be 3D.
0:18:39 Actually, I want to tell you a personal story.
0:18:46 About five years ago, ironically, I lost my stereo vision for a few months because I had a cornea
0:18:50 injury, and that means I was literally seen with one eye.
0:18:54 And like Martin said, my whole life has been trained with stereo vision.
0:18:59 So even if I was seen with one eye, I kind of know what the 3D world looked like.
0:19:05 But it was a fascinating period as a computer vision scientist for me to experiment what the
0:19:06 world is.
0:19:11 And one thing that truly drove home literally was I was frightened to drive.
0:19:12 Wow.
0:19:14 First of all, I couldn’t get on highway.
0:19:16 That speed, I could not, you know.
0:19:23 But I was just driving in my own neighborhood, and I realized I didn’t have a good distance measure
0:19:27 between my car and the parked cars on a small local road.
0:19:33 Even though I had an almost perfect understanding of how big my car is, and how big the neighbors’
0:19:34 parked cars are.
0:19:39 I had known those roads for years and years, but just driving there, I had to go so slow, like
0:19:43 almost 10 miles an hour, so that I didn’t scratch the cars.
0:19:44 Wow.
0:19:47 And that was exactly why we needed stereo vision.
0:19:51 That’s actually a great articulation of why 3D is just actually key if you’re doing some
0:19:52 processing, right?
0:19:52 Yeah.
0:19:58 So I don’t recommend it, but if you’re daring, park your car and then drive
0:19:59 with one eye and feel it.
0:20:00 In your own car.
0:20:04 On the tech side, with LLMs, a lot of the research was done at the big companies.
0:20:06 What’s the state of the research here?
0:20:12 This is definitely a newer area of research compared to LLM.
0:20:17 It’s not totally fair to say new, because in computer vision as a field, we have been doing
0:20:17 bits and pieces.
0:20:24 For example, one important revolution that has happened in 3D computer vision was a neural
0:21:26 radiance field, or NeRF.
0:20:32 And that was done by our co-founder, Ben Mildenhall, and his colleagues at Berkeley.
0:20:39 And that was a way to do 3D reconstruction using deep learning that was really taking the
0:20:42 world by storm about four years ago.
0:20:49 We’ve also got a co-founder, Christoph Lassner, whose pioneering work was part of the reason
0:20:55 Gaussian splat representation started to, again, become really popular as a way to represent
0:20:56 volumetric 3D.
0:21:03 And of course, Justin Johnson, my former student and also a co-founder of World Labs, was
0:21:10 among the first generation of deep learning computer vision students who did so much foundational
0:21:17 work in image generation. Before Transformers were out, we were using GANs to do image generation
0:21:25 and then style transfer, which really popularized some of the components or ingredients of what
0:21:26 we’re doing here.
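[Editor's note: for readers curious what NeRF's "3D reconstruction using deep learning" actually computes, the core is differentiable volume rendering: densities and colors sampled along a camera ray are alpha-composited into a single pixel. This is a minimal numpy sketch of that compositing rule with hand-picked toy samples, not World Labs' or the original NeRF code.]

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """NeRF-style volume rendering along one ray: each sample's opacity is
    alpha_i = 1 - exp(-sigma_i * delta_i), weighted by the transmittance
    (the chance the ray reaches sample i without being absorbed earlier)."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Three toy samples along one ray: empty space, thin grey haze, then a
# dense red surface. Densities, colors, and step sizes are hand-picked.
sigmas = np.array([0.0, 0.1, 10.0])
colors = np.array([[0.0, 0.0, 0.0],
                   [0.5, 0.5, 0.5],
                   [1.0, 0.0, 0.0]])
deltas = np.array([0.5, 0.5, 0.5])

pixel = composite(sigmas, colors, deltas)
print(pixel)  # the dense red sample dominates the rendered pixel
```

Because every step is differentiable, gradients from a photo-reconstruction loss flow back into the densities and colors, which is what lets NeRF learn 3D structure from 2D images; Gaussian splatting keeps the same compositing idea but represents the scene as explicit 3D Gaussians.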
0:21:33 So things were happening in academia, things were happening in industry, but I agree what
0:21:40 is exciting now is that at World Labs, we just have the conviction that we’re going to be all
0:21:49 in on this one singular big North Star problem, concentrating the world’s smartest people in computer vision,
0:21:58 in diffusion models, in computer graphics, in optimization, in AI, in data, all of them come
0:22:03 into this one team and try to make this work and to productize this.
0:22:08 I will say from an outsider standpoint, and so I’m not an expert in any of these spaces, but it really
0:22:14 feels like to solve this problem, you need experts both in AI, and that’s like the data and the models,
0:22:19 like the actual model architecture, and graphics, which is like how do you actually represent these
0:22:22 things in memory in a computer and then on the screen.
0:22:27 So it’s a very special team to actually crack this problem, which Fei-Fei’s managed to put together.
0:22:29 Well, that’s an inspiring note to wrap on.
0:22:31 Fei-Fei, thank you so much for joining us.
0:22:31 Thank you.
0:22:32 Thank you, Erik.
0:22:37 Thanks for listening to the A16Z podcast.
0:22:42 If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com/a16z.
0:22:45 We’ve got more great conversations coming your way.
0:22:46 See you next time.

What if the next leap in artificial intelligence isn’t about better language—but better understanding of space?

In this episode, a16z General Partner Erik Torenberg moderates a conversation with Fei-Fei Li, cofounder and CEO of World Labs, and a16z General Partner Martin Casado, an early investor in the company. Together, they dive into the concept of world models—AI systems that can understand and reason about the 3D, physical world, not just generate text.

Often called the “godmother of AI,” Fei-Fei explains why spatial intelligence is a fundamental and still-missing piece of today’s AI—and why she’s building an entire company to solve it. Martin shares how he and Fei-Fei aligned on this vision long before it became fashionable, and why it could reshape the future of robotics, creativity, and computational interfaces.

From the limits of LLMs to the promise of embodied intelligence, this conversation blends personal stories with deep technical insights—exploring what it really means to build AI that understands the real (and virtual) world.

Resources: 

Find Fei-Fei on X: https://x.com/drfeifei

Find Martin on X: https://x.com/martin_casado

Learn more about World Labs: https://www.worldlabs.ai/

 

Stay Updated: 

Let us know what you think: https://ratethispodcast.com/a16z

Find a16z on Twitter: https://twitter.com/a16z

Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

Subscribe on your favorite podcast app: https://a16z.simplecast.com/

Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
