What Comes After ChatGPT? The Mother of ImageNet Predicts The Future

0:00:03 I think the whole history of deep learning is, in some sense, the history of a scaling
0:00:04 up compute.
0:00:09 When I graduated from grad school, I really thought the rest of my entire career would
0:00:14 be towards solving that single problem, which is given a picture or given a scene, tell the story in natural language.
0:00:20 A lot of AI as a field, as a discipline, is inspired by human intelligence.
0:00:22 We thought we were the first people doing it.
0:00:26 It turned out that Google at that time was also simultaneously doing it.
0:00:31 One way of looking at it, it’s a generative model of 3D worlds.
0:00:35 You can input things like text, or image, or multiple images, and it will generate for
0:00:38 you a 3D world that matches those inputs.
0:00:43 So while Marble is simultaneously a world model that is building towards this vision of spatial
0:00:47 intelligence, it was also very intentionally designed to be a thing that people could find
0:00:49 useful today.
0:00:55 And we’re starting to see emerging use cases in gaming, in VFX, in film, where I think there’s
0:00:59 a lot of really interesting stuff that Marble can do today as a product, and then also set
0:01:04 a foundation for the grand world models that we want to build going into the future.
0:01:11 Fei-Fei Li is a Stanford professor, the co-director of the Stanford Institute for Human-Centered
0:01:14 Artificial Intelligence, and co-founder of World Labs.
0:01:19 She created ImageNet, the dataset that sparked the deep learning revolution.
0:01:25 Justin Johnson is her former PhD student, ex-professor at Michigan, ex-Meta research, and now co-founder
0:01:27 of World Labs.
0:01:32 Together, they just launched Marble, the first model that generates explorable 3D worlds from
0:01:33 text or images.
0:01:38 In this episode, Fei-Fei and Justin explore why spatial intelligence is fundamentally different
0:01:43 from language, what’s missing from current world models (hint: physics), and the architectural
0:01:48 insight that transformers are actually set models, not sequence models.
0:01:52 Hey everyone, welcome to the Latent Space podcast.
0:01:56 This is Alessio, founder of Kernel Labs, and I’m joined by Swyx, editor of Latent Space.
0:02:01 And we are so excited to be in the studio with Fei-Fei and Justin of World Labs.
0:02:02 Welcome.
0:02:03 We’re excited too.
0:02:04 I nearly said Marble.
0:02:06 Yeah, thanks for having us.
0:02:10 I think there’s a lot of interest in world models, and you’ve done a little bit of publicity
0:02:12 around spatial intelligence and all that.
0:02:17 I guess maybe one part of the story that is a rare opportunity for you to tell
0:02:20 is how you two came together to start building World Labs.
0:02:24 That’s very easy because Justin was my former student.
0:02:24 Yeah.
0:02:31 So Justin came to my lab. You know, the other hat I wear is a professor of computer science
0:02:32 at Stanford.
0:02:34 Justin joined my lab when?
0:02:35 Which year?
0:02:35 2012.
0:02:40 Actually, the quarter that I joined your lab was the same quarter that
0:02:41 AlexNet came out.
0:02:42 Yeah, yeah.
0:02:44 So Justin is my first.
0:02:46 Were you involved in the whole announcement drama?
0:02:47 No, no, not at all.
0:02:51 But I was sort of watching all the ImageNet excitement around AlexNet at that quarter.
0:02:56 So he was one of my very best students.
0:03:04 And then he went on to have a very successful early career as a professor at the University
0:03:06 of Michigan, Ann Arbor, and then at Meta.
0:03:14 And then, I think around, you know, more than two years ago for sure,
0:03:20 both of us independently had been looking at the development of the large models and thinking
0:03:22 about what’s beyond language models.
0:03:29 And this idea of building world models, spatial intelligence really was natural for us.
0:03:38 So we started talking and decided that we should just put all the eggs in one basket and focus on solving
0:03:40 this problem and started World Labs together.
0:03:41 Yeah, pretty much.
0:03:47 I mean, like, after that, seeing that kind of ImageNet era during my PhD, I had the sense that
0:03:51 the next sort of decade of computer vision was going to be about getting, getting AI out of the,
0:03:53 out of the data center and out into the world.
0:03:58 So a lot of my interests post-PhD kind of shifted into 3D vision, a little bit more into
0:04:01 computer graphics, more into generative modeling.
0:04:05 And I was, I thought I was kind of drifting away from my advisor post-PhD.
0:04:08 But then when we reunited a couple of years later, it turned out she was thinking of very similar
0:04:09 things.
0:04:13 So if you think about AlexNet, the core pieces of it were obviously ImageNet.
0:04:16 It was the move to GPUs and neural networks.
0:04:20 How do you think about the AlexNet equivalent model for world models?
0:04:22 In a way, it’s an idea that has been out there, right?
0:04:26 There’s been, you know, Yann LeCun is maybe like the biggest proponent, the most
0:04:26 prominent of it.
0:04:31 What have you seen in the last two years that you were like, hey, now’s the time to do this?
0:04:36 And what are maybe the things fundamentally that you want to build as far as data and kind
0:04:41 of like maybe different types of algorithms or approaches to compute to make world models
0:04:42 really come to life?
0:04:47 Yeah, I think one is just there is a lot more data and compute generally available.
0:04:51 I think the whole history of deep learning is in some sense the history of scaling up compute.
0:04:57 And if we think about, you know, AlexNet required this jump from CPUs to GPUs, but even from AlexNet
0:05:01 to today, we’re getting about a thousand times more performance per card than we had in AlexNet
0:05:02 days.
0:05:06 And now it’s common to train models not just on one GPU, but on hundreds or thousands or
0:05:08 tens of thousands or even more.
0:05:12 So the amount of compute that we can marshal today on a single model is, you know, about a
0:05:14 million fold more than we could have even at the start of my PhD.
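As a rough sanity check on those figures, treating "a thousand times per card" and "thousands of cards" as round numbers (illustrative arithmetic only, not measured benchmarks):

```python
# Back-of-the-envelope check of the compute-scaling claim above
# (round illustrative numbers, not measured benchmarks).
per_card_speedup = 1_000        # ~1000x more performance per GPU since the AlexNet days
cards_per_run = 1_000           # "hundreds or thousands or tens of thousands" of GPUs; take ~1k
total_speedup = per_card_speedup * cards_per_run
print(f"~{total_speedup:,}x more compute per model")  # ~1,000,000x, i.e. "about a million fold"
```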
0:05:19 So I think language was one of the really interesting things that started to work quite
0:05:20 well the last couple of years.
0:05:25 But as we think about moving towards visual data and spatial data and world data, you just
0:05:26 need to process a lot more.
0:05:30 And I think that’s going to be a good way to soak up this new compute that’s coming online
0:05:31 more and more.
0:05:37 Does the model of having a public challenge still work or should it be centralized inside
0:05:38 of a lab?
0:05:42 I think open science still is important.
0:05:49 You know, AI, obviously, compared to the ImageNet and AlexNet time, has really evolved.
0:05:53 That was such a niche computer science discipline.
0:05:58 Now it’s just like civilizational technology.
0:06:00 But I’ll give you an example, right?
0:06:08 Recently, my Stanford lab just announced an open data set and benchmark called Behavior, which
0:06:13 is for benchmarking robotic learning in simulated environments.
0:06:23 And that is a very clear effort in still keeping up this open science model of doing things, especially
0:06:25 in academia.
0:06:30 But I think it’s important to recognize the ecosystem is a mixture, right?
0:06:40 I think a lot of the very focused work in industry, some of them are more seeing the daylight in
0:06:44 the form of a product rather than an open challenge, per se.
0:06:44 Yeah.
0:06:47 And that’s just a matter of the funding and the business model.
0:06:49 Like you have to see some ROI from it.
0:06:56 I think it’s just a matter of the diversity of the ecosystem, right?
0:07:03 Even during the so-called AlexNet time, I mean, there were closed models, there were proprietary
0:07:11 models, there were open models, you know, or you think about iOS versus Android, right?
0:07:12 They’re different business models.
0:07:15 I wouldn’t say it’s just a matter of funding per se.
0:07:17 It’s just how the market is.
0:07:18 They’re different plays.
0:07:19 Yeah.
0:07:24 But do you feel like you could redo ImageNet today with the commercial pressure that some
0:07:25 of these labs have?
0:07:27 I mean, to me, that’s like the biggest question, right?
0:07:32 It’s like, what can you open versus what should you keep inside?
0:07:35 Like, you know, if I put myself in your shoes, right, it’s like, you raise a lot of money,
0:07:36 you’re building all of this.
0:07:41 If you had the best data set for this, what incentives do you really have to publish it?
0:07:47 And it feels like people are getting more and more pulled in, and the PhD programs
0:07:49 are getting pulled earlier and earlier into these labs.
0:07:53 So I’m curious if you think there’s like an issue right now with like how much money is
0:07:58 at stake and how much pressure it puts on like the more academia open research space, or if
0:08:00 you feel like that’s not really a concern.
0:08:05 I do have concerns, less about the pressure.
0:08:11 It’s more about the resourcing, and the imbalance in the resourcing of academia.
0:08:14 This is a little bit of a different conversation from world labs.
0:08:22 You know, I have been the past few years advocating for resourcing the healthy ecosystem, you know,
0:08:30 as the founding co-director of Stanford’s Institute for Human-Centered AI, Stanford HAI.
0:08:39 I’ve been, you know, working with policy makers about resourcing public sector and academic AI work, right?
0:08:47 We worked with the first Trump administration on this bill called the National AI Research Resource, the NAIRR bill,
0:08:56 which is scoping out a national AI compute cloud as well as a data repository.
0:09:04 And I also think that open source, open data sets continue to be important part of the ecosystem.
0:09:13 Like I said, right now in my Stanford lab, we are doing the open data set, open benchmark on robotic learning called Behavior.
0:09:16 And many of my colleagues are still doing that.
0:09:19 I think that’s part of the ecosystem.
0:09:28 I think what the industry is doing, what some startups are doing, running fast with models and creating products, is also a good thing.
0:09:37 For example, when Justin was a PhD student with me, none of the computer vision programs worked that well, right?
0:09:38 We could write beautiful papers.
0:09:39 Justin has seen beautiful.
0:09:48 And actually, even before grad school, like I wanted to do computer vision and I reached out to a team at Google and like wanted to potentially go and try to do computer vision like out of undergrad.
0:09:50 And they told me like, what are you talking about?
0:09:51 Like, you can’t do that.
0:09:53 Like, go to a PhD first and come back.
0:09:56 What was the motivation that got you so interested?
0:10:00 I had done some computer vision research during my undergrad with actually Fei-Fei’s PhD advisor.
0:10:02 There’s a lineage.
0:10:03 Yeah, there’s a lineage here.
0:10:08 So I had done some computer vision even as an undergrad and I thought it was really cool and I wanted to keep doing it.
0:10:15 So then I was sort of faced with this sort of industry academia choice even coming out of undergrad that I think a lot of people in the research community are facing now.
0:10:23 But to your question, I think like the role of academia, especially in AI, has shifted quite a lot in the last decade.
0:10:24 And it’s not a bad thing.
0:10:29 It’s a sense of, it’s because the technology has grown and emerged, right?
0:10:35 Like five or ten years ago, you really could train state-of-the-art models in the lab, even with just a couple of GPUs.
0:10:42 But, you know, because that technology was so successful and scaled up so much, then you can’t train state-of-the-art models with a couple of GPUs anymore.
0:10:43 And that’s not a bad thing.
0:10:44 It’s a good thing.
0:10:45 It means the technology actually worked.
0:10:50 But that means the expectations around what we should be doing as academics shifts a little bit.
0:10:54 And it shouldn’t be about trying to train the biggest model and scaling up the biggest thing.
0:10:59 It should be about trying wacky ideas and new ideas and crazy ideas, most of which won’t work.
0:11:01 And I think there’s a lot to be done there.
0:11:15 If anything, I’m worried that too many people in academia are hyper-focused on this notion of trying to pretend like we can train the biggest models or treating it as almost a vocational training program to then graduate and go to a big lab and be able to play with all the GPUs.
0:11:23 I think there’s just so much crazy stuff you can do around like new algorithms, new architectures, like new systems that, you know, there’s a lot you can do as one person.
0:11:31 And also, just academia has a role to play in understanding the theoretical underpinning of these large models.
0:11:33 We still know so little about this.
0:11:39 Or extend to the interdisciplinary, you know, what Justin calls wacky ideas.
0:11:41 There’s a lot of basic science ideas.
0:11:44 There’s a lot of blue sky problems.
0:11:46 So, I agree.
0:11:52 I don’t think the problem is open versus closed, productization versus open sourcing.
0:11:58 I think the problem right now is that academia by itself is severely under-resourced.
0:12:07 So, that, you know, the researchers and the students do not have enough resources to try these ideas.
0:12:08 Yeah.
0:12:13 Just for people to nerd snipe, what’s a wacky idea that comes to mind when you talk about wacky ideas?
0:12:22 Oh, like I had this idea that I kept pitching to my students at Michigan, which is that I really like hardware, and I really like new kinds of hardware coming online.
0:12:31 And in some sense, the emergence of the neural networks that we use today and transformers are really based around matrix multiplication because matrix multiplication fits really well with GPUs.
0:12:41 But if we think about how GPUs are going to scale, how hardware is likely to scale in the future, I don’t think the current system that we have, like the GPU, like hardware design, is going to scale infinitely.
0:12:46 And we start to see that even now that like the unit of compute is not the single device anymore.
0:12:48 It’s this whole cluster of devices.
0:12:49 So, if you imagine.
0:12:49 A node.
0:12:51 Yeah, it’s a node or a whole cluster.
0:12:58 But the way we talk about neural networks is still as if they are a monolithic thing that could be coded like in one GPU in PyTorch.
0:13:00 But then in practice, they could distribute over thousands of devices.
0:13:07 So, are there like just as, you know, transformers are based around MatMul and MatMul is sort of the primitive that works really well on GPUs.
0:13:13 As you imagine hardware scaling out, are there other primitives that make more sense for large scale distributed systems that we could build our neural networks on?
0:13:22 And I think it’s possible that there could be drastically different architectures that fit with the next generation or like the hardware that’s going to come 10 or 20 years down the line.
0:13:24 And we could start imagining that today.
0:13:38 It’s really hard to make those kinds of bets because there’s also the concept of the hardware lottery where let’s just say, you know, NVIDIA has won and we should just, you know, scale that out to infinity and write software to patch up any gaps we have in the mix, right?
0:13:39 I mean, yes, yes and no.
0:13:44 Like if you look at the, if you look at the numbers, like even going from Hopper to Blackwell, like the performance per watt is about the same.
0:13:45 Yes.
0:13:50 They mostly make the number of transistors go up and they make the chip size go up and they make the power usage go up.
0:13:56 But even from Hopper to Blackwell, we’re kind of already seeing like a scaling limit in terms of what is the, what is the performance per watt that we can get.
0:14:00 So, I think, I think there are, there is room to do something new.
0:14:04 And I don’t know exactly what it is and I don’t think you can get it done like in a three month cycle as a startup.
0:14:10 But I think that’s the kind of idea that if you sit down and sit with for a couple of years, like maybe you could come up, come up with some breakthroughs.
0:14:13 And I think that’s the kind of long range stuff that is a perfect match for academia.
0:14:24 Coming back to the little bit of background in history, we have this sort of research note on the scene storytelling work that you did, or the neural image captioning that you did with Andre.
0:14:33 And I just wanted to hear you guys tell that story about, you know, you were sort of embarking on that for your PhD, and Fei-Fei, like, having that reaction that you had.
0:14:39 Yeah, so I think that line of work started between me and Andre and then Justin joined, right?
0:14:43 So, Andre started his PhD.
0:14:48 He and I were looking at what is beyond ImageNet object recognition.
0:14:58 And at that time, you know, the convolutional neural network had proven some power on ImageNet tasks.
0:15:02 So, ConvNet is a great way to represent images.
0:15:11 In the meantime, I think in the language space, an early sequential model called the LSTM was also being experimented with.
0:15:18 So, Andre and I were just talking about what has been a long-term dream of mine.
0:15:23 I thought it would take 100 years to solve, which is telling the story of images.
0:15:37 When I graduated from grad school, I really thought the rest of my entire career would be towards solving that single problem, which is given a picture or given a scene, tell the story in natural language.
0:15:37 But things evolved so fast. When Andre started, we thought maybe by combining the representation of the convolutional neural network with the sequential language model, the LSTM,
0:15:51 we might be able to learn through training to match captions with images.
0:16:02 So, that’s when we started that line of work.
0:16:06 And I don’t remember, it was 2014 or 2015.
0:16:08 It was CVPR 2015, the captioning paper.
0:16:18 So, it was our first paper that Andre got it to work that was, you know, given an image.
0:16:28 The image is represented with ConvNet, the language model is the LSTM model, and then we combine it, and it’s able to generate one sentence.
0:16:30 And that was one of the first times.
0:16:33 It was pretty, I think I wrote it in my book.
0:16:36 We thought we were the first people doing it.
0:16:40 It turned out that Google at that time was also simultaneously doing it.
0:16:49 And a reporter, it was John Markoff from The New York Times, was breaking the Google story, but he by accident heard about us.
0:16:55 And then he realized that we really independently got there together at the same time.
0:16:59 So, he wrote the story of both the Google research as well as Andre and my research.
0:17:04 But after that, I think Justin was already in the lab at that time.
0:17:05 Yeah, yeah.
0:17:12 I remember the group meeting where Andre was presenting some of those results and explaining this new thing called LSTMs and RNNs that I had never heard of before.
0:17:14 And I thought like, wow, this is really amazing stuff.
0:17:15 I want to work on that.
0:17:20 So, then he had the paper at CVPR 2015 on the first image captioning results.
0:17:22 Then after that, we started working together.
0:17:30 And first we did a paper actually just on language modeling back in 2015, ICLR 2015.
0:17:32 Yeah, I should have stuck with language modeling.
0:17:34 That turned out pretty lucrative in retrospect.
0:17:40 But we did this language modeling paper together, me and Andre, in 2015, where it was like really cool.
0:17:49 We trained these little RNN language models that could, you know, spit out a couple sentences at a time, and poke at them and try to understand what the neurons inside the neural network were doing.
0:17:53 You guys were doing analysis on the different, like, memory and…
0:17:55 Yeah, yeah, it was really cool.
0:18:00 And even at that time, we had these results where you could, like, look inside the LSTM and say, like, oh, this thing is reading code.
0:18:06 So, one of the, like, one of the data sets that we trained on for this one was the Linux source code, right?
0:18:09 Because the whole thing is, you know, open source and you could just download this.
0:18:12 So, we trained an RNN on this data set.
0:18:21 And then, as the network is trying to predict the tokens there, then, you know, try to correlate the kinds of predictions that it’s making with the kind of internal structures in the RNN.
0:18:29 And there, we were able to find some correlations between, oh, like, this unit in this layer of the LSTM fires when there’s an open paren and then, like, turns off when there’s a closed paren.
0:18:33 And try to do some empirical stuff like that to figure it out.
0:18:34 So, that was pretty cool.
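A minimal sketch of the kind of probe being described, with an untrained character-level LSTM standing in for the model that was actually trained on Linux source code (the snippet of "code" text below is also just a placeholder); the point is the mechanics of correlating one hidden unit with parenthesis depth, not the result:

```python
# Probe sketch: run a character-level LSTM over source-code-like text and find the
# hidden unit whose activation best tracks parenthesis depth. The LSTM is untrained
# here (a stand-in for the model trained on the Linux source in the original work).
import torch
import torch.nn as nn

text = 'if (x > 0) { printf("(%d)\\n", foo(x)); }'   # placeholder for real source code
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}

embed = nn.Embedding(len(vocab), 16)
lstm = nn.LSTM(input_size=16, hidden_size=64, batch_first=True)

ids = torch.tensor([[stoi[ch] for ch in text]])
with torch.no_grad():
    hidden, _ = lstm(embed(ids))                      # (1, seq_len, 64)

# Ground-truth signal: running parenthesis depth at each character.
depth, depths = 0, []
for ch in text:
    depth += (ch == "(") - (ch == ")")
    depths.append(depth)
depths = torch.tensor(depths, dtype=torch.float)

# Correlate every hidden unit with the depth signal and report the best match.
acts = hidden[0]                                      # (seq_len, 64)
acts_c = acts - acts.mean(dim=0)
depth_c = depths - depths.mean()
corr = (acts_c * depth_c[:, None]).sum(0) / (acts_c.norm(dim=0) * depth_c.norm() + 1e-8)
best = corr.abs().argmax().item()
print(f"unit {best} tracks paren depth with r = {corr[best].item():.2f}")
```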
0:18:40 And that was just, like, that was kind of, like, cutting out the CNN from this language modeling part and just looking at the language models in isolation.
0:18:44 But then we wanted to extend the image captioning work.
0:18:52 And remember, at that time, we even had a sense of space, because we felt like captioning does not capture different parts of the image.
0:19:06 So, I was talking to Justin and Andre about, can we do what we ended up calling dense captioning, which is, you know, describing the scene in greater detail, especially different parts of the scene.
0:19:07 So, that’s…
0:19:07 Yeah.
0:19:09 And so, then we built this system.
0:19:15 So, then it was me and Andre and Fei-Fei on a paper the following year, CVPR, so in 2016, where we built this system that did dense captioning.
0:19:21 So, you input a single image and then it would draw boxes around all the interesting stuff in the image and then write a short snippet about each of them.
0:19:23 It’s like, oh, it’s a green water bottle on the table.
0:19:24 It’s a person wearing a black shirt.
0:19:35 And this was a really complicated neural network because that was built on a lot of advancements that had been made in object detection around that time, which was a major topic in computer vision for a long time.
0:19:40 And then it was actually, like, one joint neural network that was both, you know, learning to look at individual images.
0:19:44 Because it actually had, like, then three different representations inside this network.
0:19:48 One was the representation of the whole image to kind of get the gestalt of what’s going on.
0:19:53 Then it would propose individual regions that it wants to focus on and then look at, you know, represent each region independently.
0:19:56 And then once you look at the region, then you need to spit out text for each region.
0:19:58 So, that was a pretty complicated neural network architecture.
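Schematically, the three levels of representation Justin describes look something like the toy sketch below, written in modern PyTorch-style code with stub components (the real DenseCap model predates PyTorch and used proper region cropping and a trained captioner; this only shows the shape of the pipeline):

```python
# Toy sketch of the dense-captioning pipeline: whole-image features -> region
# proposals -> per-region features -> a short caption per region, in one forward pass.
# Every module here is a crude stand-in for the real backbone / proposal / language heads.
import torch
import torch.nn as nn

class ToyDenseCap(nn.Module):
    def __init__(self, num_regions=8, feat_dim=128, vocab_size=1000, max_len=10):
        super().__init__()
        self.backbone = nn.Conv2d(3, feat_dim, kernel_size=7, stride=4)  # whole-image features
        self.proposer = nn.Linear(feat_dim, num_regions * 4)             # predicts region boxes
        self.captioner = nn.LSTM(feat_dim, feat_dim, batch_first=True)   # language model per region
        self.word_head = nn.Linear(feat_dim, vocab_size)
        self.num_regions, self.max_len = num_regions, max_len

    def forward(self, image):
        feats = self.backbone(image)                      # (B, D, H', W') convolutional features
        pooled = feats.mean(dim=(2, 3))                   # gestalt of the whole image
        boxes = self.proposer(pooled).view(-1, self.num_regions, 4)
        # The real model pools features inside each box; here every region just reuses
        # the global feature, which keeps the sketch short.
        region_feats = pooled[:, None, :].expand(-1, self.num_regions, -1)
        seq = region_feats[:, :, None, :].expand(-1, -1, self.max_len, -1)
        seq = seq.reshape(-1, self.max_len, region_feats.shape[-1])
        words, _ = self.captioner(seq)                    # one short caption per region
        return boxes, self.word_head(words)               # boxes + per-region word logits

model = ToyDenseCap()
boxes, captions = model(torch.randn(1, 3, 224, 224))      # a single forward pass does all of it
print(boxes.shape, captions.shape)
```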
0:20:00 This was all pre-PyTorch.
0:20:01 And does it do it in one pass?
0:20:02 Yeah, yeah.
0:20:03 So, it was a single forward pass that did all of it.
0:20:07 Not only was it doing it in one pass, you also optimized inference.
0:20:10 You’re doing it on a webcam, I remember.
0:20:18 Yeah, so I had built this, like, crazy real-time demo where I had the network running, like, on a server at Stanford.
0:20:22 And then a web front end that would stream from a webcam and then, like, send the image back to the server.
0:20:24 The server would run the model and stream the predictions back.
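The demo loop being described amounts to something like the sketch below; the endpoint URL and the JSON response format are made up for illustration, not the actual Stanford setup:

```python
# Minimal client loop for a webcam-to-server captioning demo: grab frames locally,
# send each one to a remote model server, print whatever predictions come back.
# The URL and response format are hypothetical.
import cv2
import requests

SERVER = "http://example-caption-server:8000/densecap"   # placeholder endpoint

cap = cv2.VideoCapture(0)                                # local webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        _, jpeg = cv2.imencode(".jpg", frame)            # compress before sending
        resp = requests.post(SERVER, data=jpeg.tobytes(),
                             headers={"Content-Type": "image/jpeg"}, timeout=5)
        print(resp.json())                               # e.g. boxes plus short captions
finally:
    cap.release()
```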
0:20:31 So, I was just, like, walking around the lab with this laptop that would just, like, show people this, like, this network running real-time.
0:20:33 Identification and labeling as well.
0:20:34 Yeah, yeah, yeah.
0:20:34 Oh, my God.
0:20:42 It was pretty impressive because most of my graduate students would be satisfied if they can publish the paper, right?
0:20:45 They packaged the research, put it in a paper.
0:20:47 But Justin went a step further.
0:20:50 He’s like, I want to do this real-time web demo.
0:20:52 Well, actually, I don’t know if I told you this story.
0:20:58 But then we had, there was a conference that year in Santiago at ICCV, it was ICCV 15.
0:21:01 And then, like, I had a paper at that conference for something different.
0:21:06 But I had my laptop, I was, like, walking around the conference with my laptop, showing everybody this, like, real-time captioning demo.
0:21:09 And the model was running on a server in California.
0:21:13 So, it was, like, actually able to stream, like, all the way from California down to Santiago.
0:21:15 Well, the latency was terrible.
0:21:17 It was, like, one FPS.
0:21:20 But the fact that it worked at all was pretty amazing.
0:21:24 So, I was going to briefly quip that, you know, maybe vision and language modeling are not that different.
0:21:33 You know, DeepSeek OCR recently tried the crazy thing of, let’s model text from pixels and just, like, train on that.
0:21:35 And it might be the future.
0:21:35 I don’t know.
0:21:39 I don’t know if you guys have any takes on whether language is actually necessary at all.
0:21:42 I just wrote a whole manifesto.
0:21:42 Yeah.
0:21:45 This is my segue into this.
0:21:46 Yes.
0:21:48 I think they are different.
0:21:57 I do think the architecture of these generative models will share a lot of components.
0:22:12 But I think the deeply 3D, 4D spatial world has a level of structure that is fundamentally different from a purely generative signal that is one-dimensional.
0:22:13 Yeah.
0:22:16 I think there’s something to be said for pixel maximalism, right?
0:22:19 Like, there’s this notion that language is this different thing.
0:22:21 But we see language with our eyes.
0:22:24 And our eyes are just, like, you know, basically pixels, right?
0:22:28 Like, we’ve got sort of biological pixels in the back of our eyes that are processing these things.
0:22:31 And, you know, we see text and we think of it as this discrete thing.
0:22:33 But that really only exists in our minds.
0:22:39 Like, the physical manifestation of text and language in our world are, you know, physical objects that are printed on things in the world.
0:22:40 And we see it with our eyes.
0:22:42 Well, you can also think it’s sound.
0:22:49 But even sound, you can translate into a correlogram, which is a 2D signal.
0:22:49 Right.
0:22:54 And then, like, you actually lose something if you translate to this, like, purely tokenized representations that we use in LLM.
0:22:55 Right.
0:22:56 Like, you lose the font.
0:22:58 You lose the line breaks.
0:23:00 You lose sort of the 2D arrangement on the page.
0:23:03 And for a lot of cases, for a lot of things, maybe that doesn’t matter.
0:23:04 But for some things, it does.
0:23:10 And I think pixels are this sort of more lossless representation of what’s going on in the world.
0:23:17 And in some ways, a more general representation that more matches what we humans see as we navigate the world.
0:23:24 So, like, if there’s an efficiency argument to be made, like, maybe it’s not super efficient to, like, you know, render your text to an image and then feed that to a vision model.
0:23:25 That’s exactly what DeepSeek did.
0:23:27 It was, like, kind of worked.
0:23:31 I think this ties into the whole world model.
0:23:37 Like, one of my favorite papers that I saw this year was about inductive bias to probe for world models.
0:23:42 So, it was a Harvard paper where they fed a lot of, like, orbital patterns into an LLM.
0:23:46 And then they asked the LLM to predict the orbit of a planet around the sun.
0:23:49 And the model generated looked good.
0:23:54 But then if you asked it to draw the force vectors, it would be all wacky.
0:23:55 You know, it wouldn’t actually follow it.
0:24:00 So, how do you think about what’s embedded into the data that you get?
0:24:04 And then we can talk about maybe organizing for 3D world models.
0:24:06 Like, what are, like, the dimensions of information?
0:24:14 There’s the visual, but, like, how much of, like, the underlying hidden forces, so to speak, you need to extract out of this data?
0:24:16 And, like, what are some of the challenges there?
0:24:18 Yeah, I think there’s different ways you could approach that problem.
0:24:26 One is, like, you could try to be explicit about it and say, like, oh, I want to, you know, measure all the forces and feed those as training data to your model.
0:24:26 Right?
0:24:36 You could, like, sort of run a traditional physics simulation and, you know, then know all the forces in the scene and then use those as training data to train a model that’s now going to hopefully predict those.
0:24:38 Or you could hope that something emerges more latently.
0:24:39 Right?
0:24:50 That you kind of train on something end-to-end and then on a more general problem and then hope that somewhere, something in the internals of the model must learn to model something like physics in order to make the proper predictions.
0:24:53 And those are kind of the two big paradigms that we have more generally.
0:25:03 But there’s no indication that that kind of latent modeling will get you to a causal law of space and dynamics.
0:25:04 Right?
0:25:10 That’s where today’s deep learning and human intelligence actually start to bifurcate.
0:25:13 Because fundamentally, the deep learning is still fitting patterns.
0:25:23 There you sort of get philosophical and you say that, like, we’re trying to fit patterns too, but maybe we’re trying to fit, you know, a more broad array of patterns, like, with a longer time horizon, a different reward function.
0:25:32 But, like, basically the paper you mentioned is sort of, you know, that problem, that it learns to fit the specific patterns of orbits, but then it doesn’t actually generalize in the way that you’d like.
0:25:33 It doesn’t have a sort of causal model of gravity.
0:25:34 Right.
0:25:39 Because even in Marble, you know, I was trying it and it generates these beautiful sceneries and there’s, like, arches in them.
0:25:50 But does the model actually understand how, you know, the arch is actually, you know, bearing on the center stone and, like, you know, the actual physical structure of it?
0:25:59 And the other question is, like, does it matter that it does understand it as long as it always renders something that would fit the physical model that we imagine?
0:26:06 If you use the word understand the way you understand, I’m pretty sure the model doesn’t understand it.
0:26:10 The model is learning from the data, learning from the pattern.
0:26:12 Yeah.
0:26:17 Does it matter, especially for the use cases for, it’s a good question, right?
0:26:24 Like, for now, I don’t think it matters because it renders out what you need, assuming it’s perfect.
0:26:26 Yeah, I mean, it depends on the use case.
0:26:33 Like, if the use case is I want to generate sort of a backdrop for virtual film or production or something like that, all you need is something that looks plausible.
0:26:35 And in that case, probably it doesn’t matter.
0:26:41 But if you’re going to use this to, like, you know, if you’re an architect and you’re going to use this to design a building that you’re then going to go build in the real world,
0:26:46 then, yeah, it does matter that you model the forces correctly because you don’t want the thing to break when to actually build it.
0:27:04 But even there, right, like, even if your model has the semantics in it, let’s say, I still don’t think the understanding of the signal or the output on the model’s part and the understanding on the human’s part are the same thing.
0:27:07 But this gets, again, philosophical.
0:27:09 Yeah, I mean, there’s this trick with understanding, right?
0:27:12 Like, these models are a very different kind of intelligence than human intelligence.
0:27:20 And human intelligence is interesting because, you know, I think that I understand things because I can introspect my own thought process to some extent.
0:27:32 And then I believe that my thought process probably works similar to other people’s so that when I observe someone else’s behavior, then I infer that their internal mental state is probably similar to my own internal mental state that I’ve observed.
0:27:35 And therefore, I know that I understand things.
0:27:36 So there, I assume that you understand something.
0:27:42 But these models are sort of like this alien form of intelligence where they can do really interesting things.
0:27:43 They can exhibit really interesting behavior.
0:27:51 But whatever kind of internal, the equivalent of internal cognition or internal self-reflection that they have, if it exists at all, is totally different from what we do.
0:27:53 It doesn’t have the self-awareness.
0:28:06 Right. But what that means is that when we observe seemingly interesting or intelligent behavior out of these systems, we can’t necessarily infer other things about them because their model of the world and the way they think is so different from us.
0:28:11 So would you need two different models to do the visual one and the architectural generation, you think?
0:28:16 Eventually, like, there’s not anything fundamental about the approach that you’ve taken on the model building.
0:28:21 It’s more about scaling the model and the capabilities of it.
0:28:36 Or, like, is there something about being very visual that prohibits you from actually learning the physics behind this, so to speak, so that you could trust it to generate a CAD design that then is actually going to work in the real world?
0:28:40 I think this is a matter of scaling data and bettering the model.
0:28:44 I don’t think there’s anything fundamental that separates these two.
0:28:45 Yeah, I would like it to be one model.
0:28:52 But I think, like, the big problem in deep learning in some sense is how do you get emergent capabilities beyond your training data?
0:28:58 Are you going to get something that understands the forces while it wasn’t trained to predict the forces, but it’s going to learn them implicitly internally?
0:29:03 And I think a lot of what we’ve seen in other large models is that a lot of this emergent behavior does happen at scale.
0:29:07 And will that transfer to other modalities and other use cases and other tasks?
0:29:08 I hope so.
0:29:10 But that’ll be a process that we need to play out over time and see.
0:29:21 Is there a temptation to rely on physics engines that already exist out there, where, you know, basically the gaming industry has saved you a lot of this work?
0:29:24 Or do we have to reinvent things for some fundamental mismatch?
0:29:28 I think it’s sort of like climbing the ladder of technology, right?
0:29:34 Like, in some sense, the reason that you want to build these things at all is because maybe traditional physics engines don’t work in some situations.
0:29:39 If a physics engine was perfect, we would have sort of no need to build models because the problem would have already been solved.
0:29:45 So in some sense, the reason why we want to do this is because classical physics engines don’t solve problems in the generality that we want.
0:29:49 But that doesn’t mean we need to throw them away and start everything from scratch, right?
0:29:53 We can use traditional physics engines to generate data that we then train our models on.
0:29:57 And then you’re sort of distilling the physics engine into the weights of the neural network that you’re training.
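A very stripped-down version of that "distill the simulator into the weights" pattern, assuming nothing about World Labs' actual pipeline: generate (state, next state) pairs from a trivial simulator, then fit a small network to imitate it.

```python
# Sketch of distilling a (trivial) physics engine into a neural network: the "engine"
# is a point mass under gravity, and an MLP is trained to imitate one simulation step.
# A real pipeline would use a full engine and far richer state; this only shows the pattern.
import torch
import torch.nn as nn

def simulate_step(state, dt=0.02, g=-9.8):
    # state = (x, y, vx, vy); explicit Euler step under gravity
    x, y, vx, vy = state.unbind(-1)
    return torch.stack([x + vx * dt, y + vy * dt, vx, vy + g * dt], dim=-1)

states = torch.rand(4096, 4) * 10          # synthetic training data from the simulator
targets = simulate_step(states)

net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    idx = torch.randint(0, len(states), (256,))
    loss = nn.functional.mse_loss(net(states[idx]), targets[idx])
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final loss: {loss.item():.4f}")     # the engine's one-step dynamics now live in the weights
```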
0:30:05 I think that’s a lot of what, if you compare the work of other labs, people are speculating that, you know, Sora had a little bit of that.
0:30:07 Genie 3 had a bit of that.
0:30:11 And Genie 3 is explicitly like a video game.
0:30:13 Like you have controls to walk around in.
0:30:20 And I always think, like, it’s really funny how the things that we invent for fun actually do eventually make it into serious work.
0:30:21 Yeah.
0:30:26 The whole AI revolution started by graphics chips.
0:30:26 Yeah.
0:30:26 Partially.
0:30:34 Misusing the GPU, from generating a lot of triangles to generating a lot of everything else, basically.
0:30:34 Yeah.
0:30:36 We touched on Marble a little bit.
0:30:41 I think you guys chose Marble as I kind of feel like you’re sort of a little bit coming out of stealth moment, if you can call it that.
0:30:41 Yeah.
0:30:46 Maybe we can get a concise explanation from you on what people should take away.
0:30:57 Because everyone here can try Marble, but I don’t think they might be able to link it to the differences between what your vision is versus other, I guess, generative worlds they may have seen from other labs.
0:31:01 So Marble is a glimpse into our model, right?
0:31:04 We are a spatial intelligence model company.
0:31:07 We believe spatial intelligence is the next frontier.
0:31:31 In order to make spatially intelligent models, the model has to be very powerful in terms of its ability to, you know, understand, reason, generate in very multimodal fashion of worlds, as well as allow the level of interactivity that we eventually hope to be as, you know, complex as how humans can interact with the world.
0:31:37 So that’s the grand vision of spatial intelligence, as well as the kind of world models we see.
0:31:41 Marble is the first glimpse into that.
0:31:43 It’s the first part of that journey.
0:31:53 It’s the first-in-class model in the world that generates 3D worlds at this level of fidelity and is in the hands of the public.
0:31:55 It’s the starting point, right?
0:31:58 We actually wrote this tech blog.
0:32:00 Justin spent a lot of time writing that tech blog.
0:32:03 I don’t know if you had time to browse it.
0:32:17 I mean, Justin really broke it down into what are the multimodal inputs of Marble, what is the kind of editability, which, you know, allows the user to interact with the model.
0:32:20 And what are the kind of outputs we can have?
0:32:26 Yeah, so Marble, like, basically, one way of looking at it, it’s the system, it’s a generative model of 3D worlds, right?
0:32:32 So you can input things like text or image or multiple images, and it will generate for you a 3D world that kind of matches those inputs.
0:32:37 And it’s also interactive in the sense that you can interactively edit scenes.
0:32:43 Like, I could generate this scene and then say, I don’t like the water bottle, make it blue instead, like, take out the table, like, change these microphones around.
0:32:48 And then you can generate new worlds based on these interactive edits and export in a variety of formats.
0:32:54 And with Marble, we were actually trying to do sort of two things simultaneously, and I think we managed to pull off the balance pretty well.
0:32:59 One is actually build a model that goes towards the grand vision of spatial intelligence.
0:33:08 And models need to be able to understand lots of different kinds of inputs, need to be able to model worlds in a lot of situations, need to be able to model counterfactuals of how they could change over time.
0:33:11 So we wanted to start to build models that have these capabilities.
0:33:14 And Marble today does already have hints of all of these.
0:33:16 But at the same time, we’re a company, we’re a business.
0:33:24 We were really trying not to have this be a science project, but also build a product that would be useful to people in the real world today.
0:33:35 So while Marble is simultaneously a world model that is building towards this vision of spatial intelligence, it was also very intentionally designed to be a thing that people could find useful today.
0:33:45 And we’re starting to see emerging use cases in gaming, in VFX, in film, where I think there’s a lot of really interesting stuff that Marble can do today as a product.
0:33:49 And then also set a foundation for the grand world models that we want to build going into the future.
0:33:53 Yeah, I noticed one tool that was very interesting was you can record your scene inside.
0:33:54 Yes.
0:33:55 It’s very important.
0:34:02 The ability to record means a very precise control of camera placement.
0:34:09 In order to have precise camera placement, it means you have to have a sense of 3D space.
0:34:13 Otherwise, you don’t know how to orient your camera, right, and how to move your camera.
0:34:17 So that is a natural consequence of this kind of model.
0:34:21 And this is why this is just one of the examples.
0:34:32 Yeah, I find when I play with video generative models, I’m having to learn the language of being a director because I have to move them out, like pan, you know, like dolly out.
0:34:36 Sure, you cannot say pan 63 degrees to the north, right?
0:34:38 You just don’t have that control.
0:34:43 Whereas in Marble, you have precise control in terms of placing a camera.
0:34:47 Yeah, I think that’s one of the first things people need to understand.
0:34:52 It’s like it’s not, you’re not generating frame by frame, which is like what a lot of the other models are.
0:34:56 You know, people understand that an LLM generates one token at a time.
0:34:58 What are like the atomic units?
0:35:03 There’s kind of like, you know, the meshes, there’s like the splats, the voxels, there’s a lot of pieces in a 3D world.
0:35:07 What should be the mental model that people have of like your generations?
0:35:11 Yeah, I think there’s like what exists today and what could exist in the future.
0:35:14 So what exists today is the model natively output splats.
0:35:21 So Gaussian splats are these, like, you know, each one is a tiny, tiny particle that’s semi-transparent and has a position and orientation in 3D space.
0:35:24 And the scene is built up from a large number of these Gaussian splats.
0:35:28 And Gaussian splats are really cool because you can render them in real time really efficiently.
0:35:30 So you can render on your iPhone, render everything.
0:35:38 And that’s how we get that sort of precise camera control because the splats can be rendered real time on just pretty much any client-side device that we want.
0:35:43 So for a lot of the scenes that we’re generating today, that kind of atomic unit is that individual splat.
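Concretely, a single Gaussian splat is typically stored as a small record like the one below; this is a generic description of the representation, not World Labs' internal format (real systems usually store spherical-harmonic color coefficients rather than plain RGB):

```python
# A generic per-splat record for Gaussian splatting. A scene is simply a large array of
# these, which is what makes the representation cheap to rasterize on client devices.
from dataclasses import dataclass

@dataclass
class GaussianSplat:
    position: tuple[float, float, float]          # center in 3D space
    rotation: tuple[float, float, float, float]   # orientation as a quaternion
    scale: tuple[float, float, float]             # per-axis extent of the Gaussian
    color: tuple[float, float, float]             # RGB (often SH coefficients in practice)
    opacity: float                                # semi-transparency, 0..1

scene = [
    GaussianSplat((0.0, 1.2, -3.0), (1.0, 0.0, 0.0, 0.0), (0.02, 0.02, 0.05), (0.8, 0.1, 0.1), 0.7),
    # ...millions more in a real scene
]
print(len(scene), "splats")
```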
0:35:44 But I don’t think that’s fundamental.
0:35:47 I could imagine other approaches in the future that would be interesting.
0:35:56 So there, like there are other approaches that even we’ve worked on at World Labs, like our recent RTFM model, that does generate frames one at a time.
0:36:00 And there the atomic unit is generating frames one at a time as the user interacts with the system.
0:36:08 Or you could imagine other architectures in the future where the atomic unit is a token, where that token now represents, you know, some chunk of the 3D world.
0:36:12 And I think there’s a lot of different architectures that we can experiment with here over time.
0:36:14 I do want to press on, double click on this a little bit.
0:36:18 My version of what Alessio was going to say was like, what is the fundamental data structure of a world model?
0:36:23 Because exactly like you said, like it’s either a Gaussian splat or it’s like the frame or what have you.
0:36:33 You also, in the previous statements, focused a lot on the physics and the forces, which is something over time, loosely speaking.
0:36:34 I don’t see that in Marble.
0:36:35 I presume it’s not there yet.
0:36:39 Maybe if there was like a Marble 2, you would have movement.
0:36:43 Or is there a modification to Gaussian splats that make sense?
0:36:45 Or would it be something completely different?
0:36:47 Yeah, I think there’s a couple of modifications that make sense.
0:36:52 And there’s actually a lot of interesting ways to integrate things here, which is another nice place of working in this space.
0:36:54 Then there’s actually been a lot of research work on this.
0:36:59 Like when you talk about wacky ideas, like there’s actually been a lot of really interesting academic work on different ways to imbue physics.
0:37:02 We can also do wacky ideas in those things.
0:37:04 All right.
0:37:06 But then it’s like Gaussian splats are themselves little particles.
0:37:11 There’s been a lot of approaches where you basically attach physical properties to those splats.
0:37:16 And say that each one has a mass or like maybe you treat each one as being coupled with some kind of virtual spring to nearby neighbors.
0:37:19 And now you can start to do sort of physics simulation on top of splats.
0:37:34 So one kind of avenue for adding physics or dynamics or interaction to these things would be to, you know, predict physical properties associated with each of your splat particles and then simulate those downstream, either using classical physics or something learned.
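A toy version of that first avenue, attaching masses and nearest-neighbor springs to splat centers and integrating them forward; purely illustrative, with positions only and a naive explicit Euler step:

```python
# Toy mass-spring dynamics over splat centers: give each splat a mass, connect it to its
# nearest neighbors with springs, and integrate under gravity. A real system would also
# update each splat's rotation/scale and use a stabler integrator plus collision handling.
import torch

N, k_spring, dt = 500, 50.0, 0.01
g = torch.tensor([0.0, -9.8, 0.0])
pos = torch.rand(N, 3)                        # splat centers
vel = torch.zeros(N, 3)
mass = torch.ones(N, 1)

dists = torch.cdist(pos, pos)
neighbors = dists.topk(5, largest=False).indices[:, 1:]   # 4 nearest neighbors, skipping self
rest_len = dists.gather(1, neighbors)                     # spring rest lengths (N, 4)

for _ in range(100):                                      # simulation loop
    diff = pos[neighbors] - pos[:, None, :]               # (N, 4, 3) vectors toward neighbors
    length = diff.norm(dim=-1, keepdim=True).clamp(min=1e-6)
    spring = (k_spring * (length - rest_len[..., None]) * diff / length).sum(dim=1)
    vel = vel + (spring / mass + g) * dt
    pos = pos + vel * dt

print("simulated splat centers:", pos.shape)
```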
0:37:40 Or, you know, the kind of the beauty of working in 3D is things compose and you can inject logic in different places.
0:37:48 So one way is sort of like we’re generating a 3D scene, we’re going to predict 3D properties of everything in the scene, then we use a classical physics engine to simulate the interaction.
0:37:57 Or you could do something where like as a result of a user action, the model is now going to regenerate the entire scene in splats or some other representation.
0:38:04 And that could potentially be a lot more general because then you’re not bound to whatever sort of, you know, physical properties you know how to model already.
0:38:10 But that’s also a lot more computationally demanding because then you need to regenerate the whole scene in response to user actions.
0:38:18 But I think this is a really interesting area for future work and for adding on to a potential Marble 2, as you say.
0:38:21 Yeah, there’s opportunity for dynamics, right?
0:38:25 What’s the state of, like, splat density, I guess?
0:38:29 Like, do we, can we render enough to have very high resolution when we zoom in?
0:38:33 Are we limited by like the amount that you can generate, the amount that we can render?
0:38:36 Like, how are these going to get super high fidelity, so to speak?
0:38:40 You have some limitations, but depending on your target use case.
0:38:45 So like one of the big constraints that we have on our scenes is we wanted things to render cleanly on mobile.
0:38:48 And we wanted things to render cleanly in VR headsets.
0:38:53 So those devices have a lot less compute than you have in a lot of other situations.
0:39:01 And like, if you want to get a splat file to render at high resolution, high, like 30 to 60 FPS on like an iPhone from four years ago,
0:39:04 then you are a bit limited in like the number of splats that you can handle.
0:39:11 But if you’re allowed to like work on a recent, like even this year’s iPhone, or like a recent MacBook, or even if you have a local GPU,
0:39:19 or if you don’t need, if you don’t need that 60 FPS, 1080p, like, then you can relax the constraints and get away with more splats.
0:39:21 And that lets you get higher resolution in your scenes.
0:39:26 One use case I was expecting, but didn’t hear from you was embodied use cases.
0:39:29 Are you, you’re just focusing on virtual for now?
0:39:34 If you go to WordLab’s homepage, there is a particular page called MarbleLabs.
0:39:38 There we showcase different use cases.
0:39:47 And we actually organize them into visual effects use cases, gaming use cases, as well as simulation use cases.
0:39:53 And in that, we actually show this is a technology that can help a lot in robotic training, right?
0:40:02 This goes back to what I was talking about earlier. Speaking of data starvation, robotic training really lacks
0:40:10 You know, high fidelity, real world data is absolutely very critical, but you’re just not going to get a ton of that.
0:40:21 Of course, the other extreme is just purely internet video data, but then you lack a lot of the controllability that you want to train your embodied agents with.
0:40:26 So simulation and synthetic data is actually a very important middle ground for that.
0:40:34 I’ve been working in this space for many years, and one of the biggest pain points is where do you get the synthetic simulated data?
0:40:40 You have to curate assets and build these, compose these complex situations.
0:40:44 And in robotics, you want a lot of different states.
0:40:50 You want the embodied agent to interact in the synthetic environment.
0:41:00 And Marble actually has real potential for helping to generate these synthetic simulated worlds for embodied agent training.
0:41:03 Obviously, that’s on the homepage.
0:41:04 It’ll be there.
0:41:09 I just, I was like trying to make the link to, as you said, like you also have to build like a business model.
0:41:11 The market for robotics, obviously, is very huge.
0:41:18 Maybe you don’t need that, or maybe we need to build up and solve the virtual worlds first before we go to embodied.
0:41:21 That is actually, that is to be decided.
0:41:23 I do think that.
0:41:27 Because everyone else is going straight there, right?
0:41:32 Not everyone else, but there is an excitement, I would say.
0:41:37 But, you know, I think the world is big enough to have different approaches.
0:41:44 Yeah, I mean, and we always view this as a pretty horizontal technology that should be able to touch a lot of different industries over time.
0:41:48 And, you know, Marble is a little bit more focused on creative industries for now.
0:41:52 But I think the technology that powers it should be applicable to a lot of different things over time.
0:41:55 And robotics is one that, you know, is maybe going to happen sooner than later.
0:41:59 So design, right, is very adjacent to creative.
0:42:00 Oh, yeah, definitely.
0:42:02 Like, I think it’s like the architecture stuff?
0:42:02 Yes.
0:42:02 Okay.
0:42:04 Yeah, I mean, I was joking online.
0:42:09 I posted this video on Slack of like, oh, who wants to use Marble to plan your next kitchen remodel?
0:42:11 It actually works great for this already.
0:42:19 Just like take two images of your kitchen, like reconstruct it in Marble, and then use the editing features to see what would that space look like if you change the countertops or change the floors or change the cabinets.
0:42:24 And this is something that’s, you know, we didn’t necessarily build anything specific for this use case.
0:42:30 But because it’s a powerful horizontal technology, you kind of get these emergent use cases that just fall out of the model.
0:42:40 We have early beta users, using an API key, who are already building for interior design use cases.
0:42:41 I just did my garage.
0:42:42 I should have known about this.
0:42:43 I know.
0:42:47 Next time you remodel, we can be of help.
0:42:49 Well, kitchen is next, I’m sure.
0:42:50 Yeah.
0:42:53 Yeah, I’m curious about the whole spatial intelligence space.
0:42:55 I think we should dig more into that.
0:42:57 One, how do you define it?
0:43:07 And, like, what are the gaps between that and traditional intelligence, what people might think about with LLMs when, you know, Dario says we have a data center full of Einsteins.
0:43:09 That’s like traditional intelligence.
0:43:10 It’s not spatial intelligence.
0:43:14 What is required to be spatial intelligent?
0:43:19 First of all, I don’t understand that sentence, a data center full of Einsteins.
0:43:23 I just don’t understand that.
0:43:24 It’s not a deep one.
0:43:24 It’s an analogy.
0:43:25 It’s an analogy.
0:43:33 Well, so a lot of AI as a field, as a discipline is inspired by human intelligence, right?
0:43:37 Because we are the most intelligent animal we know in the universe for now.
0:43:44 And if you look at human intelligence, it’s very multi-intelligent, right?
0:43:54 There is a psychologist, I think his name is Howard Gardner, who in the 1980s literally used the term multiple intelligences to describe human intelligence.
0:44:02 And there is linguistic intelligence, there is spatial intelligence, there is logical intelligence, and emotional intelligence.
0:44:10 So for me, when I think about spatial intelligence, I see it as complementary to language intelligence.
0:44:18 So I personally would not say it’s spatial versus traditional, because I don’t know what traditional means, what does that mean?
0:44:22 I do think spatial is complementary to linguistic.
0:44:26 And how do we define spatial intelligence?
0:44:36 It’s the capability that allows you to reason, understand, move, and interact in space.
0:44:42 And I use this example of the deduction of DNA structure, right?
0:44:58 And of course, I’m simplifying this story, but a lot of that had to do with the spatial reasoning of the molecules and the chemical bonds in the 3D space to eventually conjecture a double helix.
0:45:11 And that ability that humans, or Francis Crick and Watson had done, it is very, very hard to reduce that process into pure language.
0:45:33 But every day, right, I’m here trying to grasp a mug, this whole process of seeing the mug, seeing the context where it is, seeing my own hand, opening of my hand that geometrically would match the mug, and touching the right affordance points.
0:45:36 All this is deeply, deeply, deeply spatial.
0:45:37 It’s very hard.
0:45:40 I’m trying to use language to narrate it.
0:45:47 But on the other hand, that narrated language itself cannot get you to pick up a mug.
0:45:48 Yeah, bandwidth constraint.
0:45:49 Yes.
0:45:55 I did some math recently on, like, if you just spoke all day, every day for 24 hours a day, how many tokens do you generate?
0:46:03 At the average speaking rate of, like, 150 words per minute, it roughly rolls up to about 215,000 tokens per day.
0:46:08 And, like, your world that you live in is so much higher bandwidth than that.
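For reference, the arithmetic behind that figure, treating tokens as roughly one per spoken word (a simplification; subword tokenizers would yield somewhat more):

```python
# Back-of-the-envelope for the "talk all day" estimate above.
words_per_minute = 150
words_per_day = words_per_minute * 60 * 24
print(f"{words_per_day:,} words/day")   # 216,000 -- in the ballpark of the ~215k tokens quoted
```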
0:46:23 Well, I think that is true, but if I think about Sir Isaac Newton, right, it’s, like, you have things like gravity at the time that have not been formalized in language that people inherently spatially understand, that things fall, right?
0:46:37 But then it’s helpful to formalize that in some way, or, like, you know, all these different rules that we use language to, like, really capture something that empirically and spatially you can also understand, but it’s easier to, like, describe in a way.
0:46:49 So I’m curious, like, the interplay of, like, spatial and, like, linguistic intelligence, which is, like, okay, you need to understand some rules are easier to write in language for then the spatial intelligence to understand.
0:46:54 But you cannot, you know, you cannot write, put your hand like this and put it down this amount.
0:46:58 So I’m always curious about how you leverage each other together.
0:47:05 I mean, if anything, like the example of Newton, like, Newton only thinks to write down those laws because he’s had a lot of embodied experience in the world.
0:47:06 Right, yeah, exactly.
0:47:14 And actually, it’s useful to distinguish between the theory building that you’re mentioning versus, like, the embodied, like, the daily experience of being embedded in the three-dimensional world, right?
0:47:23 So to me, spatial intelligence is sort of encapsulating that embodied experience of being there in 3D space, moving through it, seeing it, actioning it.
0:47:27 And as Fei-Fei said, you can narrate those things, but it’s a very lossy channel.
0:47:34 It’s just, like, the notion of, you know, being in the world and doing things in it is a very different modality from trying to describe it.
0:47:42 But because we as humans are animals who have evolved interacting in space all the time, like, we don’t even think that that’s a hard thing, right?
0:47:49 And then we sort of naturally leap to language and then theory building as mechanisms to abstract above that sort of native spatial understanding.
0:47:56 And in some sense, LLMs have just, like, jumped all the way to those highest forms of abstracted reasoning, which is very interesting and very useful.
0:48:05 But spatial intelligence is almost like opening up that black box again and saying maybe we’ve lost something by going straight to that fully abstracted form of language and reasoning and communication.
0:48:08 You know, it’s funny as a vision scientist, right?
0:48:13 I always find that vision is underappreciated because it’s effortless for humans.
0:48:15 You open your eyes.
0:48:17 As a baby, you start to see your world.
0:48:19 We’re somehow born with it.
0:48:21 We’re almost born with it.
0:48:28 But you have to put effort in learning language, including learning how to write, how to do grammar, how to express.
0:48:41 And that makes it feel hard, whereas something that nature spends way more time actually optimizing, which is perception and spatial intelligence, is underappreciated by humans.
0:48:43 Is there proof that we are born with it?
0:48:44 You said almost born.
0:48:48 So it sounds like we actually do learn after we’re born.
0:48:55 When we are born, our visual acuity is lower, and our perceptual ability does increase over time.
0:49:00 But we are, most humans are born with the ability to see.
0:49:07 And most humans are born with the ability to link perception with motor movements, right?
0:49:11 I mean, the motor movement itself takes a while to refine.
0:49:15 But, and then animals are incredible, right?
0:49:18 Like, I was just in Africa earlier this summer.
0:49:21 These little animals, they’re born, and within minutes they have to get going.
0:49:24 And otherwise, you know, the lions will get them.
0:49:34 And in nature, you know, it took 540 million years to optimize perception and spatial intelligence.
0:49:40 Whereas the most generous estimation of language development is probably half a million years.
0:49:41 Wow.
0:49:43 That’s longer than I would have guessed.
0:49:45 I’m being very generous.
0:49:45 Yeah.
0:49:45 Yeah.
0:49:51 Yeah, no, I was, you know, sort of going through your book and I was realizing that one of the
0:49:56 interesting links to something that we covered on the podcast is language model benchmarks.
0:50:01 And how Winograd actually put in all these, like, sort of physical impossibilities that
0:50:03 require spatial intelligence, right?
0:50:08 Like, A is on top of B, therefore A cannot fall through B, is obvious to us.
0:50:09 But to a language model, it could happen.
0:50:10 I don’t know.
0:50:13 Maybe it’s like a part of the, you know, the next token prediction.
0:50:16 And that’s sort of what I mean about, like, unwrapping this abstraction, right?
0:50:20 Like, if your whole model of the world is just, like, seeing sequences of words after
0:50:23 each other, it’s really kind of hard to know, like, why not?
0:50:24 It’s actually unfair.
0:50:24 Right.
0:50:28 But then the reason it’s obvious to us is because we are internally mapping it back to some
0:50:30 three-dimensional representation of the world that we’re familiar with.
0:50:34 So the question is, I guess, like, how hard is it, you know, how long is it going
0:50:39 to take us to distill from, like, I use the word distill, I don’t know if you agree with
0:50:42 that, to distill from your world models into a language model?
0:50:45 Because we do want our models to have spatial intelligence, right?
0:50:49 Like, and do we have to throw the language model out completely in order to do that?
0:50:50 Or?
0:50:50 No.
0:50:51 No, right?
0:50:51 Yeah, I don’t think so.
0:50:53 I think they’re multimodal.
0:50:56 I mean, even our model, Marble today, takes language as an input.
0:50:56 Right.
0:50:57 Right.
0:51:00 So, it’s deeply multimodal.
0:51:03 And I think in many use cases, these models will work together.
0:51:06 Maybe one day we’ll have a universal model.
0:51:10 I mean, even if you do, like, there’s sort of a pragmatic thing where people use language
0:51:13 and people want to interact with systems using language.
0:51:17 Even pragmatically, it’s useful to build systems and build products and build models that
0:51:18 let people talk to them.
0:51:19 So, I don’t see that going away.
0:51:24 I think there’s a sort of intellectual curiosity of saying how, like, intellectually, how much
0:51:29 could you build a model that only uses vision or only uses spatial intelligence?
0:51:33 I don’t know that that would be practically useful, but I think it’d be an interesting intellectual
0:51:35 or academic exercise to see how far you could push that.
0:51:41 I think, I mean, not to bring it back to physics, but I’m curious, like, if you had a highly precise
0:51:46 world model and you didn’t give it any notion of, like, our current understanding of the standard
0:51:50 model of physics, how much of it would it be able to come up with?
0:51:51 And like, recreate from scratch?
0:51:56 And what level of, like, language understanding it would need?
0:51:59 Because we have so many notations that we created and kind of use, but,
0:52:02 like, maybe it would come up with a very different model of it and still be accurate.
0:52:08 And I wonder how much we’re kind of limited by that. You know, how people say humanoids need
0:52:10 to be like humans because the world is built for humans.
0:52:16 And in a way, it’s like the way we build language constrains some of the outputs that we can get
0:52:17 from these other modalities as well.
0:52:20 So I’m super excited to follow your work.
0:52:20 Yeah.
0:52:24 I mean, like, there’s another, I mean, you actually don’t even need to be doing AI to answer that
0:52:24 question.
0:52:26 You could discover aliens and see what kind of physics they have.
0:52:26 Right.
0:52:27 Right.
0:52:28 And they might have a totally different.
0:52:32 Well, Fei-Fei said, we are so far the smartest animal in the universe.
0:52:32 Right.
0:52:35 So, I mean, that is a really interesting question, right?
0:52:40 Like, is our knowledge of the universe and our understanding of physics, is it constrained in some way by our
0:52:43 own cognition or by the path dependence of our own technological evolution?
0:52:48 And you almost want to do an experiment and say,
0:52:52 like, if we were to rerun human civilization again, would we come up with the same physics in the same order?
0:52:55 And I don’t think that’s a very practical experiment.
0:53:05 You know, one experiment I wonder if people could run is this: we have plenty of astrophysical data now on planetary and
0:53:11 celestial body movements; just feed the data into a model and see if Newtonian law emerges.
0:53:14 My guess is it probably won’t.
0:53:16 That’s my guess.
0:53:17 It’s not.
0:53:25 The abstraction level of Newtonian law is at a different level from what these LLMs represent.
0:53:25 Yeah.
0:53:34 So, I wouldn’t be surprised if, given enough celestial movement data, an LLM would actually predict pretty accurate
0:53:36 movement trajectories.
0:53:43 Let’s say I invent a planet orbiting a star and give you enough data.
0:53:48 My model would tell you, you know, on day one where it is, day two where it is.
0:53:49 I wouldn’t be surprised.
0:53:58 But F equals MA or, you know, action equals reaction, that’s just a whole different abstraction level.
0:54:01 That’s beyond just today’s LLM.
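For reference, the kind of abstraction being contrasted here is the jump from accurate trajectory prediction to compact laws like Newton's second law and the law of universal gravitation, which together imply the elliptical orbits that the raw data only exhibits implicitly:

$$
F = ma, \qquad F = G\,\frac{m_1 m_2}{r^2}
$$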
0:54:06 Okay, what world model would you need to not have it be a geocentric model?
0:54:12 Because if I’m training just on visual data, it makes sense that you think the sun rotates around the earth, right?
0:54:14 But obviously, that’s not the case.
0:54:17 How would it learn that?
0:54:20 Like, I’m curious about all these, like, you know, forces that we talk about.
0:54:24 It’s like, sometimes maybe you don’t need them because as long as it looks right, it’s right.
0:54:32 But, like, as you make the jump to, like, trying to use these models to do more high-level tasks, how much can we rely on them?
0:54:35 I think you need kind of a different learning paradigm, right?
0:54:45 So, like, you know, there’s a bit of conflation happening here, where we’re asking, is it LLMs and language and symbols versus, you know, human theory building and human physics?
0:54:51 And they’re very different, because unlike an LLM, the human objective function is to understand the world and thrive in your life.
0:54:59 And the way that you do that is by, you know, sometimes you observe data and then you think about it and then you try to do something in the world and it doesn’t match your expectations.
0:55:04 And then you want to go and update sort of your understanding of the world online.
0:55:06 And people do this all the time, constantly.
0:55:11 Like, whether it’s, you know, I think my keys are downstairs, so I go downstairs and I look for them and I don’t see them.
0:55:13 And, oh, no, they’re actually up in my bedroom.
0:55:23 So, because we’re constantly interacting with the world, we’re constantly having to build theories about what’s happening in the world around us and then falsify or add evidence to those theories.
0:55:29 And I think that that kind of process writ large and scaled up is what gives us F equals MA in Newtonian physics.
0:55:35 And I think that’s a little orthogonal to, you know, the modality of the model that we’re training, whether it’s language or spatial.
0:55:45 The way I put it is, this is almost more efficient learning, because you have a hypothesis of, here are the different possible worlds that are consistent with my available data.
0:55:50 And then you do experiments to eliminate the worlds that are not possible and you resolve to the one that’s right.
0:55:57 To me, that’s also how I have theory of mind, which is, like, I have a few hypotheses of what you’re thinking, and what you’re thinking.
0:56:05 And I try to create actions to resolve that or check my intuition as to what you’re thinking, you know.
0:56:08 And obviously, all of us don’t do any of these.
0:56:18 Theory of mind possibly also extends into emotional intelligence, which today’s AI is really not touching at all, right?
0:56:20 And we really, really need it.
0:56:24 You know, people are starting to depend on these things probably too much.
0:56:28 And that’s a whole topic of other debate.
0:56:31 I do have to ask because a lot of people have sent this to us.
0:56:32 How much do we have to get rid of?
0:56:35 You know, is sequence-to-sequence modeling out the window?
0:56:37 Is attention out the window?
0:56:39 Like, how much are we re-questioning everything?
0:56:42 I think you stick with stuff that works, right?
0:56:44 So attention is still there.
0:56:46 I think attention is still there.
0:56:47 I think there’s a lot.
0:56:49 Like, you don’t need to fix things that aren’t broken.
0:56:54 And, like, there’s a lot of hard problems in the world to solve, but let’s focus on one at a time.
0:57:01 I think it is pretty interesting to think about new architectures or new paradigms or drastically different ways to learn.
0:57:05 But you don’t need to throw away everything just because you’re working on new modalities.
0:57:16 I think, in world models, we are actually going to see algorithms or architectures beyond sequence-to-sequence.
0:57:20 Oh, but here, actually, I think there’s a little bit of, you know, technological confusion.
0:57:22 And transformers already solved that for us, right?
0:57:24 Like, transformers are actually not a model of sequences.
0:57:27 A transformer is natively a model of sets.
0:57:28 And that’s very powerful.
0:57:33 But because a lot of transformer work grew out of earlier architectures based around recurrent neural networks,
0:57:39 and RNNs definitely do have a built-in architectural bias, like, they really do model one-dimensional sequences.
0:57:40 Okay.
0:57:45 But transformers are just models of sets, and they can model a lot of things, like, those sets could be, you know, 1D sequences.
0:57:46 They could be other things as well.
0:57:48 Do you literally mean set theory?
0:57:49 Like, yeah, yeah.
0:57:51 So, yeah, yeah, yeah.
0:57:54 So, a transformer is actually not a model of a sequence of tokens.
0:57:57 A transformer is actually a model of a set of tokens, right?
0:58:02 The only thing that injects order into it, in the standard transformer architecture,
0:58:06 the only thing that differentiates the order of the tokens is the positional embedding that you give the tokens, right?
0:58:14 So, if you choose to give a sort of 1D positional embedding, that’s the only mechanism that the model has to know that it’s a 1D sequence.
0:58:19 But all the, like, operators that happen inside a transformer block are either token-wise, right?
0:58:23 So you have an FFN, you have QKV projections, you have per-token normalization.
0:58:25 All of those happen independently per token.
0:58:28 And then you have interactions between tokens through the attention mechanism.
0:58:31 But that’s also sort of permutation equivariant.
0:58:36 So, if I permute my tokens, then the attention operator produces an output permuted in exactly the same way.
0:58:39 So, it’s actually natively an architecture of sets of tokens.
0:58:41 Literally, a transform.
0:58:41 Yeah.
0:58:43 In the math sense.
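Justin's point about sets versus sequences is easy to check numerically. Below is a minimal, illustrative NumPy sketch of single-head self-attention (toy sizes, random weights, not anyone's production code): permuting the input tokens simply permutes the outputs in the same way, and only a positional embedding that stays fixed to positions makes the 1D order matter.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Per-token projections plus all-pairs attention: nothing here knows about order.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return A @ V

rng = np.random.default_rng(0)
n, d = 5, 8                                   # 5 tokens, 8-dim embeddings (toy sizes)
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
perm = rng.permutation(n)                     # shuffle the "sequence"

out = self_attention(X, Wq, Wk, Wv)
out_perm = self_attention(X[perm], Wq, Wk, Wv)
print(np.allclose(out[perm], out_perm))       # True: permutation equivariant, i.e. a model of a set

# A 1D positional embedding is the only thing that injects order.
pos = rng.normal(size=(n, d))
out_pos = self_attention(X + pos, Wq, Wk, Wv)
out_pos_perm = self_attention(X[perm] + pos, Wq, Wk, Wv)   # positions stay put, tokens move
print(np.allclose(out_pos[perm], out_pos_perm))            # False: order now matters
```

The FFN and per-token normalization of a full transformer block act independently per token, so they do not change this picture; as described above, order only enters through the positional embedding.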
0:58:48 I know we’re out of time, but we just want to give you the floor for some call to action,
0:58:53 either on people that would enjoy working at WorldLabs, what kind of people should apply,
0:58:57 what research people should be doing outside of WorldLabs that would be helpful to you,
0:58:58 or anything else on your mind?
0:59:05 I do think it’s a very exciting time to be looking beyond just language models
0:59:10 and think about the boundless possibilities of spatial intelligence.
0:59:15 So, we are actually hungry for talent, ranging from very deep researchers, right,
0:59:22 thinking about problems like the ones Justin just described, you know, training large world models.
0:59:28 We are hungry for engineers, good engineers building systems, you know,
0:59:31 from training optimization to inference to product.
0:59:40 And we’re also hungry for good business and product thinkers, go-to-market and,
0:59:41 you know, business talent.
0:59:44 So, we are hungry for talent.
0:59:50 Especially now that we have exposed the model to the world through Marble,
0:59:55 I think we have a great opportunity to work with an even bigger pool of talent
1:00:02 to solve both the model problem and deliver the best product to the world.
1:00:06 Yeah, I think I’m also excited for people to try Marble and do a lot of cool stuff with it.
1:00:10 I think it has a lot of really cool capabilities, a lot of really cool features that fit together really nicely.
1:00:15 In the car coming here, Justin and I were saying that people, okay, it’s only been 24 hours,
1:00:22 have not totally discovered some of the advanced editing modes, right?
1:00:24 Like, turn on the advanced mode.
1:00:31 You can, like Justin said, change this color of the bottle, you know, change your floor and change the trees.
1:00:36 Well, I actually tried to get there, but when it says create, it just makes me create a completely different world.
1:00:43 You need to click on the advanced mode. We can improve on our UI/UX, but remember to click on it.
1:00:45 Yeah, we need to hire people who work on the product.
1:00:50 But one thing that came through clearly is that you guys are also looking for intellectual fearlessness,
1:00:54 which is something that I think you both hold as a principle.
1:01:02 Yeah, I mean, we are literally the first people who are trying this, both on the model side as well as on the product side.
1:01:04 Thank you so much for joining us.
1:01:05 Thank you, guys.
1:01:06 Thanks for having us.
1:01:11 Thanks for listening to the A16Z podcast.
1:01:17 If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com slash A16Z.
1:01:19 We’ve got more great conversations coming your way.
1:01:20 See you next time.
1:01:28 This information is for educational purposes only and is not a recommendation to buy, hold, or sell any investment or financial product.
1:01:36 This podcast has been produced by a third party and may include paid promotional advertisements, other company references, and individuals unaffiliated with A16Z.
1:01:43 Such advertisements, companies, and individuals are not endorsed by AH Capital Management LLC, A16Z, or any of its affiliates.
1:01:49 Information is from sources deemed reliable on the date of publication, but A16Z does not guarantee its accuracy.
1:01:50 Thank you.
1:01:57 For more information, visit a16z.com slash disclosures.

Fei-Fei Li is a Stanford professor, co-director of Stanford Institute for Human-Centered Artificial Intelligence, and co-founder of World Labs. She created ImageNet, the dataset that sparked the deep learning revolution. 

Justin Johnson is her former PhD student, ex-professor at Michigan, ex-Meta researcher, and now co-founder of World Labs.

Together, they just launched Marble—the first model that generates explorable 3D worlds from text or images.

In this episode Fei-Fei and Justin explore why spatial intelligence is fundamentally different from language, what’s missing from current world models (hint: physics), and the architectural insight that transformers are actually set models, not sequence models.

 

Resources:

Follow Fei-Fei on X: https://x.com/drfeifei

Follow Justin on X: https://x.com/jcjohnss

Follow Shawn on X: https://x.com/swyx

Follow Alessio on X: https://x.com/fanahova

 

Stay Updated:

If you enjoyed this episode, please be sure to like, subscribe, and share with your friends.

Follow a16z on X: https://x.com/a16z

Follow a16z on LinkedIn:https://www.linkedin.com/company/a16z

Follow the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX

Follow the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details, please see http://a16z.com/disclosures.
