AI transcript
0:00:11 you look at it and like there’s a world that’s generated in front of your eyes and it’s amazing that it’s happening.
0:00:14 I was very excited about how far can we push that.
0:00:20 And it’s at the point where like a human who is not an expert will watch it and think it looks real, right?
0:00:22 And I think that’s pretty incredible.
0:00:29 Genie 3 from Google DeepMind can create fully interactive, persistent worlds in real time from just a few words.
0:00:32 Today, we’re joined by the team behind it.
0:00:40 Shlomi Fruchter and Jack Parker-Holder from Google DeepMind, plus Anjney Midha, Marco Mascorro, and Justine Moore from a16z.
0:00:48 We’ll talk about how it works, the special memory that keeps worlds consistent, the surprising behaviors we’ve learned, and where world models are headed next.
0:00:50 Let’s get into it.
0:00:55 Jack, Shlomi, Genie 3 has taken over the internet.
0:00:57 We’re honored to have you on the podcast today.
0:01:01 Has the response surprised you? Reflect a little bit on the reaction.
0:01:11 We weren’t sure how big it’s going to be, but today I felt definitely that we have something that was for a long time coming, basically being able to generate environments in real time.
0:01:20 I think a lot of work that was done in Google DeepMind and outside pointed to that direction, but we really wanted to make it happen, and I hope we have.
0:01:27 Tim, why don’t we reflect internally a little bit about what we found so game-changing about Genie 3 and why we’re so excited to have this conversation.
0:01:28 Yeah, for sure.
0:01:30 I mean, first of all, it’s an amazing model.
0:01:35 I think there’s a lot of excitement around the special memory, the consistency across all the frames.
0:01:49 I think this is the first time you can have some sort of interactive way of doing this stuff with videos, because it used to be that you would do one prompt and get 15 seconds of video, but now you can actually have an interactive element to it, which I think is very exciting.
0:01:52 So can you elaborate a little bit more on your insights on this?
0:02:02 How was it, for example, figuring out what data you should collect, how to make it so interactive, and how to keep the flow of the whole video, which I thought was phenomenal?
0:02:03 Sure, yeah.
0:02:13 So I think you kind of highlighted a few capabilities, sort of the length of the generation, the consistency of the world, maybe diversity as well of the kind of things you can generate.
0:02:21 I think the main thing is that obviously we made progress in quite a few different fronts, right, in separate efforts, right?
0:02:29 So we had this Genie 2 project, which was much more about the 3D environments it could generate, and it wasn’t super high quality.
0:02:39 It felt like a big step coming from Genie 1, but it wasn’t the same quality as things like Veo 2, the state-of-the-art video model at the time, which came out in December, roughly the same time, about a week after Genie 2.
0:02:44 And then obviously internally, there was a lot of discussion between the two projects about the different directions we were pursuing.
0:02:55 And then Shlomi had also worked on GameNGen, right, which is the Doom paper, as people know it, which I think you guys also wrote a nice piece on straight after it came out.
0:03:13 And so we felt that across these different projects, we had quite a lot of interesting things that would naturally kind of combine, and we could basically take the most ambitious version of the combined project and see if it was possible.
0:03:18 And fortunately it was. I think the timeline is probably the bit that surprised many of us.
0:03:26 Because obviously we set ourselves these goals and we tried very hard to achieve them, but you can never be totally sure how it’s actually going to feel when you’ve got to that point.
0:03:33 I think it ended up being something that resonated with people a lot more than maybe we expected, but we were always believers.
0:03:54 Yeah, I’ll just add to this that I think the real-time component is really important, and not many people experience it firsthand, but we really try it in the release to at least have a few trusted testers interact with it, and also get the feel of it by adding these overlays that show what happens, how people can use the keyboard to control it.
0:03:58 And I think there is something magical about the real-time aspect.
0:04:07 I felt it for the first time when our game engine model started working fast enough, and we were just like, oh my god, I can actually walk around.
0:04:09 And it was a bit of a wow moment.
0:04:14 And yeah, I think there is something when it responds immediately that is really magical.
0:04:21 I think that’s kind of sparked the imagination of many people when the Doom kind of simulation came out, and here we really wanted to push it to somewhere.
0:04:26 We weren’t sure it was going to work, so it was definitely at the edge of what’s possible, I think.
0:04:27 That’s how we felt.
0:04:30 So we just said, yeah, let’s try and see if we can make it happen.
0:04:41 I think you guys, I don’t know if this was on purpose or not, but you perfectly timed it when everyone on X and Reddit and everywhere was making those videos of characters walking through games.
0:04:50 But they obviously weren’t interactive, they weren’t real-time, and then you guys came out with this release that was like, now this is an actual product, and it blew folks away.
0:04:54 I’m curious, because you can imagine so many different applications for this, right?
0:05:06 Like, more controllable video generation, or making it much easier to create games, even personal gaming, where someone’s just kind of creating their own world they walk through, like RL environments for agents, robotics.
0:05:10 Are there any particular use cases that you’re most excited about?
0:05:17 I think all of the applications basically stem from the ability to generate a world, just from a few words.
0:05:28 And I think, for me, this potential goes back to when I started looking at video models, pretty early on, when one of the models was Imagen Video, which was built by Google researchers.
0:05:38 Those models were very basic compared to what we have today, but the ability to simulate something, where you look at it and there’s a world being generated in front of your eyes, it’s amazing that it’s happening.
0:05:43 And at that point, I was very excited about how far we could push that, right?
0:05:48 So I think there was one way to do it, and Genie is definitely another way to make it a bit more interactive.
0:05:53 So I think all of the applications basically stem from this core capability.
0:05:56 So it can be entertainment, of course, as you said.
0:05:59 It can be training agents.
0:06:02 It can be helping agents to reason about the world, education.
0:06:08 So I don’t think any particular application is more important than others.
0:06:12 I think it’s really up to how developers in the future will be on top of that.
0:06:16 Yeah, I would give basically the same answer in the end, with a different journey to get there, right?
0:06:23 I personally worked in reinforcement learning for a few years before starting the Genie project in 2022.
0:06:30 And the motivation originally was like that in RL at the time, we had this problem where we’d say, which environment should we try and solve, right?
0:06:37 Because once you’ve already done Go, which people thought was years or decades away and was then solved in 2016...
0:06:40 Well, not exactly solved, but we reached superhuman level in 2016.
0:06:46 And then StarCraft three years later, which is not a particularly long time for something incredibly significant.
0:06:51 So around 2021, there was a big question of what we should try to do with RL.
0:06:57 We knew that the algorithms could learn superhuman capabilities if they had the right environment, but we didn’t know what that environment would be.
0:07:00 And so we were working on designing our own ones, right, with colleagues.
0:07:06 But then instead, a more promising path seemed to appear when the first text-to-image models came out.
0:07:11 It was like, what if we just think long term: what’s the way to really unlock unlimited environments?
0:07:18 That being said, over the course of the project, and originally we started it, I guess in 2022, it was very focused on that one application.
0:07:23 But it seems quite clear now that this could have a big impact on all those other areas you mentioned, right?
0:07:27 So I think it’s like language models in 2021, maybe.
0:07:34 You probably wouldn’t have guessed that an IMO gold medal would come that fast, a few years later, as a direct application of that technology, right?
0:07:37 It was probably more like, oh, it can help me with my emails or whatever.
0:07:45 And I think it’s really cool to build these kind of new class of foundation models and then see what people can imagine doing with it.
0:07:47 And that’s one of the very exciting things about sharing the research preview, right?
0:07:48 So you’ve got this kind of feedback.
0:07:52 So we’re hoping a lot of these things can happen.
0:07:58 One of the things in the research preview post, Jack, that blew me away was this.
0:08:01 And it wasn’t even your first GIF, I think, in the blog post.
0:08:02 It was either second or third.
0:08:06 You had this visual of somebody painting the wall with the paintbrush.
0:08:15 And then the character moves to a different part of the wall, paints, and then moves back.
0:08:17 And the original paint is still there.
0:08:18 And I didn’t believe it.
0:08:19 I was like, there’s no way.
0:08:22 And then I read, and you’re right, it was described as a special memory.
0:08:25 So the persistence part for me, I’m not taking away from all the other stuff.
0:08:27 The interactivity is amazing.
0:08:32 But I think, broadly speaking, folks expected that at some point, video generation, for example, would become real time.
0:08:36 When I saw the Genie 3 post, it was like, okay, they actually went and did it.
0:08:41 But the special memory, the persistence, was when I kind of sat up in my chair and I was like, how did that happen?
0:08:45 Could you talk a little bit about when did you discover that as an emergent property?
0:08:48 Or was that a specific design goal?
0:08:50 What’s the backstory on that?
0:08:51 Because that feels like a big unlock, Jack.
0:08:52 Why don’t we start with you?
0:08:54 Yeah, so that’s a great question.
0:08:56 I’ll say a few things.
0:09:02 So the TLDR is, it was totally planned for, but still incredibly surprising when it worked that well.
0:09:06 So that specific sample, when I saw it, it was hard to believe.
0:09:08 For a second, I actually wasn’t sure that the model had generated it.
0:09:15 It took me watching it a few times, really checking, freezing the frames and looking back, to confirm that it was the same.
0:09:19 But going back a few steps, so Genie 2 had some memory, right?
0:09:26 So this got kind of lost because, I mean, Genie 2 came at a time when there were lots of announcements, very exciting announcements.
0:09:29 I mean, Veo 2 came only a few days later; it was a busy time of the year.
0:09:33 And the main headline act was that we could generate new worlds at all, right?
0:09:35 So that was the thing that we wanted to emphasize.
0:09:39 But it did have a few seconds of memory.
0:09:44 And we had a couple of examples, like I created a robot near a pyramid, looked away, looked back, and the pyramid’s there.
0:09:46 But it’s like kind of blurry.
0:09:47 It’s not perfect.
0:09:52 But some other models around the same time or more recently didn’t have this feature, right?
0:09:57 So people kind of indexed to that because they didn’t notice the early signs of it in the Genie 2 work.
0:10:04 And then for Genie 3, we basically went much more ambitious on the same sort of approach, right?
0:10:10 And we made it a headline goal for ourselves: can we really make the memory work, right?
0:10:17 We said we want a minute plus memory and real time and this higher resolution all in the same model.
0:10:20 And those are kind of conflicting objectives, right?
0:10:22 So we set ourselves this kind of technical challenge.
0:10:28 And we said, if we target this, then it’s just about feasible and it’ll be pretty incredible.
0:10:30 And then you still don’t know, obviously, how it’s going to pan out.
0:10:38 So then when you get to the end of the research, seven months later, to see the samples, it still is quite mind-blowing, to be honest.
0:10:44 So yeah, it’s kind of planned for, but still pretty cool and exciting when you see it.
0:10:54 One thing that we didn’t want to do was build an explicit representation, right?
0:10:58 So there are definitely methods that are able to achieve consistency.
0:11:05 And they did that through explicit representations, some 3D, you know, NeRFs, and other methods that basically say,
0:11:13 okay, if we know what the world looks like, we can use prior assumptions that the world remains pretty much static,
0:11:16 then we can build a representation and know what you’re looking at.
0:11:22 So that’s great, I think, for some applications, but we didn’t want to go down this path because we felt it’s somewhat limiting.
0:11:26 So we can definitely say that the model doesn’t do that.
0:11:29 It generates frame by frame.
0:11:33 And we think this is really key for the generalization to actually work.
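To make the distinction above concrete, here is a minimal sketch of frame-by-frame world generation that keeps consistency implicitly, by conditioning each new frame on a rolling window of past frames and actions rather than on an explicit 3D scene representation such as a NeRF. The class, method names, window size, and frame rate are illustrative assumptions, not Genie 3’s actual implementation.

```python
from collections import deque

# Illustrative sketch only: autoregressive, frame-by-frame generation with
# implicit memory. The generator, window size, and frame rate are assumptions.
WINDOW_SECONDS = 60          # roughly the "minute plus" memory discussed above
FPS = 24                     # assumed frame rate for the sketch
WINDOW_FRAMES = WINDOW_SECONDS * FPS

class AutoregressiveWorldModel:
    def __init__(self, generator):
        # `generator` stands in for a learned model:
        # (text prompt, past frames, past + current actions) -> next frame
        self.generator = generator
        self.frames = deque(maxlen=WINDOW_FRAMES)   # rolling context = implicit memory
        self.actions = deque(maxlen=WINDOW_FRAMES)
        self.prompt = ""

    def reset(self, text_prompt: str):
        # A new world is specified purely by text, as discussed in the interview.
        self.prompt = text_prompt
        self.frames.clear()
        self.actions.clear()
        first_frame = self.generator(self.prompt, [], [])
        self.frames.append(first_frame)
        return first_frame

    def step(self, action):
        # Each frame is predicted from the prompt plus recent history, so anything
        # seen within the window (e.g. paint on a wall) can be re-rendered
        # consistently when the camera looks back at it.
        frame = self.generator(self.prompt, list(self.frames), list(self.actions) + [action])
        self.frames.append(frame)
        self.actions.append(action)
        return frame
```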
0:11:38 Every time someone interacts with it for the first time and they like test, they look away and then look back,
0:11:39 I’m always like holding my breath.
0:11:41 And then it looks back and it’s the same.
0:11:41 I’m like, whoa.
0:11:44 It’s still really, it’s really cool.
0:11:44 It’s very cool.
0:11:46 And how long is this special memory?
0:11:47 I don’t know if you can talk about it.
0:11:51 You mentioned a minute plus, but is there some sort of like measure that you have?
0:11:55 Is it like, can you keep it for half an hour or what is the limit on that?
0:12:03 There is no fundamental limitation, but the current design limits it to about one minute of this type of memory.
0:12:06 Yeah, it’s also a real-time trade-off for the guests as well.
0:12:14 We felt that, given the breadth and the other capabilities, a minute was sufficient for this version; it’s quite a significant leap.
0:12:17 But obviously, eventually you’d want to go beyond that.
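As a back-of-envelope illustration of why long memory, real time, and higher resolution pull against each other: at an assumed frame rate (the interview only says “real time”), each frame has a fixed generation budget, while a one-minute memory means staying consistent with on the order of a thousand past frames. The numbers below are assumptions for illustration only.

```python
# Back-of-envelope sketch of the memory vs. real-time trade-off.
# FPS is assumed for illustration; the interview only states "real time",
# "higher resolution", and "a minute plus" of memory.
FPS = 24
MEMORY_SECONDS = 60

frame_budget_ms = 1000 / FPS              # ~41.7 ms to produce each frame
context_frames = FPS * MEMORY_SECONDS     # ~1440 past frames to stay consistent with

print(f"Per-frame generation budget: {frame_budget_ms:.1f} ms")
print(f"Frames covered by a one-minute memory window: {context_frames}")
```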
0:12:29 One more question related to that, on going from Genie 1 onward, compared with, for example, LLMs. With DeepSeek R1, they saw in the paper that the longer they kept it running,
0:12:36 they would suddenly see these interesting behaviors, like the model starting to reason, or saying, oh, I’m wrong on this.
0:12:37 I should self-correct.
0:12:42 Do you see anything in kind of like this scaling from two to three?
0:12:49 Do you see any sort of interesting behavior that you were not expecting that suddenly just appears by increasing the amount of data and compute?
0:12:57 Yeah, I’ll just say, I think there is a bit of like overall, definitely like many generative models, we see that improvements happen with scale.
0:13:02 I think that’s not secret and I don’t think it’s not the same type of intelligence.
0:13:15 I would say like NLMs, I’m not sure if reasoning is the right term, but we do see that some definitely things like it can infer from a few approach like a door and it makes sense for the agents to maybe open it.
0:13:25 So you might see that it’s starting to do that, for example, or there’s some like better world understanding that happens over time and it just like things look better and more realistic.
0:13:28 So I think these are the trends that we’ve still observed.
0:13:29 Yeah.
0:13:34 And from Genie 2 to 3, I think the real-world capabilities really increased, right?
0:13:42 So on the physics side, some of the water simulations, and some of the lighting as well, are really breathtaking.
0:13:47 I think we have this example of the storm on the blog and that one I think is super cool.
0:13:53 And it’s at the point where like a human who is not an expert will watch it and think it looks real, right?
0:13:55 And I think that’s pretty incredible.
0:14:00 Whereas with Genie 2, it kind of understood roughly what these things should do, but you know, it wasn’t real, right?
0:14:05 You can look at it and you can clearly see that it’s sort of not really photorealistic.
0:14:08 So I think that’s quite a big leap on the quality in that side.
0:14:18 Yeah, one of the things that was really cool in all the examples was the water; it’s a great way to see whether it understands what the world is and how objects interact.
0:14:22 And that example someone posted of the feet going in the puddle was amazing.
0:14:27 But then there was also that example of like a cartoon character.
0:14:39 It was more of an animated style who was like running across this kind of green patch of land and then ran into this blue kind of wavy thing that looks like water and he started swimming, which I thought was really interesting.
0:14:48 Like, were there particular things you had to do around that for the model to be able to understand, like, how characters should interact in different environments and different styles?
0:15:07 What you’re basically describing is like the real breadth of different kind of environment terrains and worlds and things like that, like water or walking on sand versus going downhill and snow and how the agent’s sort of interactions should differ given the terrain that they’re in.
0:15:11 And I think that that really is a property of scale and breadth of training.
0:15:15 So this is very much like an emergent thing.
0:15:19 I don’t think there’s anything like really specific we do for this, right?
0:15:25 You, again, like, you hope the model has learned this because it should have like a general world knowledge.
0:15:30 It doesn’t always work perfectly, but in general, it’s pretty good.
0:15:33 Like, so for the skiing examples, you do go fast when you go downhill.
0:15:37 And then when you turn, try and go back uphill, it’s very slow, if not at all possible.
0:15:43 When you go into water, obviously, you hope, as you said, that the agent will start swimming and splashing.
0:15:44 And this does typically happen.
0:15:48 When you look down near a puddle, hopefully you’re wearing Wellington boots.
0:15:52 Like, this kind of stuff does just kind of make sense.
0:15:57 And I think it feels pretty magical because it very much aligns with what you were thinking about the world.
0:15:59 And the model just generates it all.
0:16:04 So, yeah, that’s also one of the really exciting things, for sure.
0:16:11 Yeah, and on top of that, one kind of trade-off that typically we have is that we want the model to do two things.
0:16:15 We want the model to create the world in a way that looks consistent.
0:16:20 So, Jack said, like, if you walk in the rain or in puddles, then probably wearing boots.
0:16:27 But if we provide it with a different description or, like, the prompt is saying something else, we want it to still follow the prompt.
0:16:31 And there is some tension here because some things are very unlikely, right?
0:16:35 You might say, I want to wear flip-flops and jump in the rain or whatever.
0:16:42 Then the model still has to try and create something that is very unlikely.
0:16:47 And that’s where, typically, you know, video models maybe find it more challenging.
0:16:56 And that’s where, you know, our models might find it more challenging, but still successful to a surprising degree to go into this kind of low-probability area.
0:16:59 And I think that’s really, in a way, that’s what we want, right?
0:17:08 Like, many people don’t want to just look at a video that looks like their own room; they want something a bit more exciting.
0:17:17 And that’s where, like, I think this is the magic of the models, that they can take you to places that maybe are not so likely to be in reality.
0:17:20 Like, the text following is really amazing in this model.
0:17:23 And that does feel really magical.
0:17:26 I think this is something that Veo does really well as well, right?
0:17:28 Like, pretty much what you ask for.
0:17:32 It’s really well aligned with text.
0:17:34 And we have that with Genie 3.
0:17:40 So you could describe very specific worlds and really kind of, like, arbitrary, silly things.
0:17:42 And it pretty much works.
0:17:52 Like, we actually had this discussion because people were very disappointed to find out that the video I made of my dog was not actually generated from a photograph of her.
0:17:53 I just described her in text.
0:18:00 And, yeah, I don’t know if that’s a big secret, but it looks exactly like her.
0:18:02 And the model just kind of knows, right?
0:18:05 I think that’s pretty amazing.
0:18:11 So I think that that’s actually a really important capability that we didn’t have with Genie 2 as well, right?
0:18:13 Because we relied on image prompting.
0:18:19 And so there was some transfer issue, where you rely on Imagen to generate the image.
0:18:23 And that often does look really good, but it’s not necessarily a good image for starting the world.
0:18:29 Whereas, like, going directly from text, you get the controllability for anything you want.
0:18:35 Plus, it just kind of naturally works because it’s in the, like, correct space for the model to do its thing.
0:18:36 And that’s something really powerful.
0:18:37 And why is that, Jack?
0:18:41 What do you think led to such a massive gain in instruction following and text adherence?
0:18:43 Because it’s a pretty hard thing to do.
0:18:46 Well, I mean, our team had never really worked on this.
0:18:49 And so Genie 1 and 2 both worked with image prompting.
0:18:58 And so obviously, like, for this next phase, we leveraged a lot of the research done internally on other projects.
0:19:04 And personnel-wise, I mean, Shlomi’s obviously been co-leading the Veo project.
0:19:09 And so we were able to kind of build on a lot of other work and ideas internally.
0:19:14 And that basically allowed us to kind of turbocharge progress, right?
0:19:25 So if we’d done this sort of by incrementally building ourselves on an island, it would have taken, I think, a lot longer than being part of Google DeepMind,
0:19:30 where we have these teams that have a lot of knowledge in different areas and sort of lean and build on.
0:19:40 Which I think is super exciting about our big industry right now is that we have so many experts in different areas that we can, like, seek out advice and help from.
0:19:46 And Shlomi, a question for you on that: having led the Veo 3 work, which is kind of mind-blowing,
0:19:52 is there a reason why this is Genie 3 and not, like, Veo 3 real-time?
0:19:56 So I think it’s definitely a bit different, right?
0:20:02 Like, Genie allows you to navigate the environment and then maybe take actions, right?
0:20:06 And that’s not something that VO at this point can do.
0:20:10 But there are other aspects that Genie doesn’t have, right?
0:20:12 Genie doesn’t have audio, for example, right?
0:20:19 So we just think that, while there are definitely potential similarities, it’s sufficiently different.
0:20:25 Also, another thing is that at this point, Genie 3 is not available, you know, as a product.
0:20:32 Whereas Veo we do think about as a product, one that is mainstream and has become very popular.
0:20:35 And, you know, what the future holds, I don’t know.
0:20:43 But at this point, we just felt it’s sufficiently different in terms of its capabilities and how we think about it.
0:20:46 So Genie 3 is pretty much a research preview, right?
0:20:48 It’s not something we are releasing at this point.
0:20:54 You know, something we think about a lot is what are the edges of a modality?
0:21:01 We’re talking about this all the time, which is, you know, the lines start blurring pretty quickly between real-time image and video.
0:21:04 And then real-time video and interactive, whatever, world generation, world model.
0:21:11 I don’t think we have a good word for what Genie 3 is yet, but you guys called it world model, which is, I think, a great term.
0:21:20 But in your mind, where do the video generation modalities stop and real-time worlds, you know, start?
0:21:25 And do you think in the future, are these converging into basically one modality?
0:21:32 Or if you had to predict over the next few years, do you guys think, actually, yeah, these will diverge into completely different disciplines?
0:21:38 It seems like they share kind of one parent today, which is, you know, video generation.
0:21:40 But where is the world going, do you think?
0:21:41 Are these two completely different fields?
0:21:44 From my perspective, they’re different.
0:21:47 So I would say modality is one thing, right?
0:21:48 We have text, we have audio.
0:21:52 Even within audio, there are different type of sub-modalities.
0:21:54 Speech is not the same as music.
0:21:57 We have different products for music generation.
0:22:00 We have other models for speech generation, speech understanding.
0:22:04 So even within one modality, you can have different flavors.
0:22:08 And then, of course, you have video and other things.
0:22:13 So I think, basically, I would say the modality is one dimension.
0:22:20 And another is how fast or how quickly we can create new samples.
0:22:28 And completely orthogonal, maybe, the direction or dimension is how much control we have, right?
0:22:35 So I think we kind of picked a specific direction, a specific vector in that space, for Genie 3.
0:22:41 I think different products, different models can try and go in a different direction.
0:22:45 I think the space is pretty big and there are a lot of trade-offs to be made.
0:22:48 So, yeah, I don’t know.
0:22:49 I think it really depends.
0:22:53 Some people believe there is, you know, one model that will do everything.
0:22:57 Or, I think it’s still an open question what the best way is.
0:23:01 Like, we’re in a place where engineering is a big part of our research, right?
0:23:03 And actually making those things work, it’s not a paper, right?
0:23:06 Where we want to build something that people can actually use.
0:23:13 So I think this really matters: abstract ideas only get you to some point.
0:23:16 But to actually build things, you have to make some concrete decisions.
0:23:19 And I think it kind of, like, forces you to decide what you want to do and what you’re doing.
0:23:23 Yeah, I think this is a really interesting point, Mike.
0:23:27 And ultimately, it has to be driven by, like, technical decisions.
0:23:31 And also, like, the goals, right?
0:23:41 So if you look at the models right now, we obviously made a choice that we want Veo 3 and Genie 3 to be separate projects this year, right?
0:23:48 And if you look at them both, as they are right now, they each have capabilities that the other doesn’t have.
0:23:59 And technically, combining all of that into one model right now would be, I think, very challenging. I mean, Veo 3 clearly has a higher quality threshold than Genie 3, right?
0:24:03 And it has very different priorities, right?
0:24:10 So then the natural things you could say, oh, well, you know, what if we just took these together and combined them?
0:24:15 But that may not be the best next step for either of those two models, right?
0:24:22 So it may not be the case that the thing that the other one has is actually the most compelling thing for a completely different experience.
0:24:34 And I think that given the breadth of interest in both models, right, there’s actually quite a small set of people that are, like, really actively using both.
0:24:41 And they tend to be more folks like yourselves who are just more broadly interested in AI, right, rather than, like, really downstream use cases.
0:24:53 So you mentioned agent training, for one, which is very high-action-frequency and requires more egocentric,
0:24:55 sort of, I guess, more like worlds where tasks can be achieved.
0:25:01 It doesn’t require, you know, the high-quality, cinema-style videos you could generate with a Veo model, right?
0:25:02 It’s quite different.
0:25:08 And then on the filmmaking element, I’m not so sure that Genie 3 is really there at this point.
0:25:11 And that wouldn’t necessarily be the goal.
0:25:12 I don’t know.
0:25:17 On filmmaking, Justine can do some pretty incredible things with the filmmaking tools today.
0:25:18 You’d be surprised.
0:25:21 Give me access and I will make amazing films with Genie 3.
0:25:28 I guess that did kind of get to one of my questions, though, which is the work you guys are doing is incredible.
0:25:34 And you clearly probably have so much going on in your brains just to coordinate training these models and managing these teams.
0:25:41 How much do you also have to think about, like, what are the downstream use cases of the model when you’re training it?
0:25:46 Because you could imagine a world in which you’re just like, we don’t really know or care what people are going to do with it yet.
0:25:48 We’re just going to go in the research direction.
0:25:50 We think we should go and see what happens.
0:26:02 But based on how you guys are talking about it, it sounds like you’ve also been pretty thoughtful around what are the different capabilities or features needed for different potential use cases, at least, of different models.
0:26:10 Yeah, I’ll say that basically we have some applications in mind, but that’s not what’s driving the research.
0:26:15 It’s more about how far can we push in this particular direction?
0:26:23 Can we make all of that work, like, really great quality, really fast generation, real time, very controllable?
0:26:28 So I think that’s kind of what drives us, I think, to develop Genie3.
0:26:37 And the applications kind of, like, follow, and I don’t think, you know, to be honest, I don’t know what would be the applications for, like, I think we’re very surprised.
0:26:49 You know, I’d like to mention, like, we all, people find new ways in how it can be useful and to prompt it to have, like, visual stuff, you know, people just discovered it, right?
0:26:50 We didn’t even think about it initially.
0:26:56 So I expect kind of the same thing, and I think that’s why I am excited for more people to be able to access in the future.
0:27:07 And in general, our approach is to make sure that over time there is more access to the models we build.
0:27:11 And I think that’s the only way to discover what’s the real potential.
0:27:20 I guess one question, somewhat related to that: how are you thinking about going forward, like Genie 4, 5, or other models? What is top of mind right now?
0:27:33 For example, if you wanted to focus on, I don’t know, it seems like gaming could be one of the applications: multiplayer-type games where you have two special memories, or two completely different views that at some point merge.
0:27:37 How are you thinking on, like, going forward, like, what’s next?
0:27:40 Is it, like, scaling these models just on more data, more compute?
0:27:48 Is it creating this sort of, like, multi-universe type of things where you have multiple players, multiple people looking at the same model, but in different views?
0:27:49 What’s, like, top of mind for you guys?
0:27:53 Top of mind, I think, for the next few days might be a vacation.
0:27:58 After that, maybe walking my dog in the real world.
0:28:04 And then, I think, you mentioned a bunch of really interesting things, to be honest.
0:28:10 And, like, I think we are, we’re still collecting a lot of feedback on this current model, right?
0:28:17 And I think that, in general, we are most interested in building just the most capable models, right?
0:28:26 And so, we would hope to have even broader impact in future, and really enable other teams to do cool things with it, right?
0:28:27 Both internally and externally.
0:28:35 And for me, I started this with a very, very focused vision about AGI.
0:28:46 And honestly, for what I’m excited about for AGI, which is more embodied agents, I still really believe this is the fastest path to getting these agents into the real world.
0:28:49 And I think we made a big step towards that.
0:28:56 But, and still, like, I’m sometimes even more excited about applications I never thought of that come up from other people seeing the model, right?
0:29:04 So, I think it’s kind of this, like, trade-off of, you know, obviously you want to focus on some applications, but then you want to be open-minded about others.
0:29:12 And I think that’s the real joy of building models like this, right, is you get to see all of these people can be way more creative than me with it.
0:29:15 So, I think that there’s, like, always really cool things that we can do.
0:29:20 And I honestly don’t really, can’t really tell you in one year what the biggest application will be.
0:29:24 But we’ll definitely be trying to build better models.
0:29:26 Yeah, I’m really excited about it.
0:29:30 I think we’re only as impressive, you know, maybe the model is.
0:29:39 I think they’re very far from actually simulating the world accurately and being able to do, kind of put a person in there and then do whatever they want.
0:29:47 And, I mean, when I say far, it doesn’t mean it’s far in terms of, you know, calendar time because we are really in an accelerated timeline.
0:29:51 But it feels like there is more work to do to get there.
0:30:06 And I just imagine, once we can actually, whatever the form factor would be, step into this world and just tell it what we want to experience.
0:30:07 There’s so many applications.
0:30:14 Imagine, for example, someone is afraid of talking to people on a stage or in a podcast, right?
0:30:16 They can simulate that, right?
0:30:18 Or you can have someone who is, like, afraid of spiders.
0:30:22 They can maybe actually see themselves getting over that.
0:30:27 So that’s, like, you know, just one example of something that’s, actually, my wife thought about it.
0:30:28 It’s not my idea.
0:30:32 So I think it’s really, like, there’s so many things, right?
0:30:40 So I think it all hinges on the ability to simulate the world and maybe put ourselves in it.
0:30:46 Maybe seeing yourself from the side and potentially having agents interacting with things.
0:30:52 And, yeah, the realism and really making it work in the way that is similar to our world, I think it’s really key.
0:30:56 I am actually personally petrified of skiing, and the model is already quite good at that.
0:30:59 So I might, when things quieten down, spend some time.
0:31:03 Because I promised my wife that our children would grow up knowing how to ski.
0:31:08 And we’re getting close to the age where I have to live up to my promise, and I’m not sure if I want to do it yet.
0:31:13 So we have to improve the model for you, Jack, so you can actually get that in distribution.
0:31:14 I hope so.
0:31:20 We were just talking about, before we started, that we might see applications, like, in robotics.
0:31:22 I mean, Jack, you were talking about embodied AI.
0:31:25 And right now, the limitation in robotics is the data, right?
0:31:27 Like, how much data you can collect.
0:31:35 And now, probably, you can just generate a lot of different scenes that you were not able to do before, purely from, like, just recording videos or so.
0:31:37 So I think that’s another thing that is pretty exciting.
0:31:40 And, I mean, congrats on the model.
0:31:41 It’s phenomenal.
0:31:50 On the robotics application, there was a conversation that I was listening to from Demis yesterday, where he was talking about your guys’ work on Genie 3.
0:31:56 And he mentioned that there’s an agent, I think you guys call it SIMA, right?
0:31:58 Which can then interact with the Genie agent.
0:32:12 And as I was hearing him describe it, it was kind of breaking my mind: you had one simulation agent asking the Genie agent to essentially create a real-time environment for it to interact with.
0:32:19 Right, which was when I realized, oh, the way you guys have built it, it’s composable with other agents.
0:32:23 Can you talk a little bit about why that’s so important for robotics, like Marco was saying?
0:32:35 And what are the major limitations today that you think we’d have to overcome as a space to make the robotics sort of progress, the rate of progress in robotics, much faster than it is now?
0:32:39 So we designed it to be an environment rather than an agent, right?
0:32:42 So Genie 3 is very much like an environment model.
0:32:47 Like we don’t see it as like an agent itself that can like think and act in the world.
0:32:51 It’s more just a general purpose sort of simulator in a sense, right?
0:32:54 That can actually simulate experiences for agents.
0:33:00 And we know that like learning from experience is a really important paradigm for agents, right?
0:33:07 That’s how we got AlphaGo because the agent AlphaGo learned by playing Go by itself, trying new things, right?
0:33:14 And then learning from feedback with reinforcement learning, learning to improve itself and actually discover new things.
0:33:19 It’s like it discovered new moves at Move 37 that humans didn’t think was a worthwhile move, right?
0:33:23 But actually AlphaGo learned that it was because it could experience and try things for itself.
0:33:29 And in robotics, we have this paradigm right now where there’s some data-driven approaches, right?
0:33:37 Where you can collect data in a quite laborious way, but it looks like the downstream task.
0:33:41 So it looks real and there’s not so much of a mismatch between the two domains.
0:33:44 Or you can learn in simulation, right?
0:33:47 But the robotic simulations, even the best ones,
0:33:50 and we have some of the best ones at DeepMind with MuJoCo,
0:33:51 which we work with,
0:33:54 They’re still quite far away from the real world, right?
0:33:56 And so you have the sim-to-real gap.
0:34:13 But even the sim-to-real gap itself, I think is kind of like poorly named because what people consider to be real in robotics is typically still a lab or some very constrained environment where you’ve got a bunch of spotlights on a robot and then tons of researchers crowding around, watching, you know?
0:34:33 Whereas really real for me is, it’s the ability to walk my dog when I’m too busy, to hold the lead, cross the street, you know, see someone who’s scared of dogs, know to go around them, see someone with a ball, change directions, like all these challenging situations in the real world, right?
0:34:40 And of course, you still have gripping, you still have these other tasks, but you need to really discover your own behaviors from your own experience, right?
0:34:49 And that’s, that doing that in physical embodied worlds is super challenging because there’s so many reasons why, firstly, that could be expensive to collect data in those settings.
0:34:54 You’d have to keep moving the robot back to where it started every time it doesn’t do something right.
0:34:56 And also it could be unsafe, right?
0:35:03 So there’s many reasons why we can’t really do learning from experience in the physical world, right?
0:35:08 So we do it in simulation, but really what we think with Genie 3 is it’s the best of both, right?
0:35:12 Because you’re taking a real world data driven approach, right?
0:35:14 But then you’ve got the ability to learn in simulation.
0:35:18 So it kind of combines the good parts of each of those.
0:35:31 And so that’s why I think it could be super powerful, not just for a robot example, but I really love this idea of having, when it rains in London a lot, not having to take my dog for the second walk would be great.
0:35:36 And as you can see, we basically built the model for Jack personally.
0:35:40 Vacations, that’s what’s driving the product.
0:35:43 There’s a lot of dog owners out there.
0:35:43 Yeah.
0:35:46 I’m just saying, clearly, Jack, it’s time to move to California.
0:35:47 Yeah.
0:35:48 That’s the solution.
0:35:50 Less rain.
0:35:51 Less lag.
0:35:56 I mean, I personally love California, but my wife’s not convinced, sorry.
0:35:58 We’re convinced here.
0:36:03 Yeah, just to touch on maybe a final point on the robotics part.
0:36:08 It’s definitely the case that robotics is more than visual, right?
0:36:10 I think this is an important point.
0:36:21 We can drive the decisions of the robot by looking around, but it still has to do actuations, decide where to move, and respond to the environment.
0:36:36 So I think there are definitely some gaps, but at the core of the problem is being able to reason about the environment, and we think that’s something world models, or general-purpose world models such as Genie 3, can really help with.
0:36:50 And maybe with future research, we can actually bridge those gaps of physical understanding and of getting actual physical responses from the world, which is a very interesting direction to explore.
0:36:55 One last question from my side, and I don’t know if you can answer this, but is it going to become public?
0:37:00 Like, can developers access it at some point, or is there like some sort of idea on this?
0:37:04 So as you can see, we are very excited about having more people accessing it.
0:37:07 So we’re, we’re definitely want to make it happen.
0:37:15 There is no kind of like a concrete timeline at the moment, but you know, I’m sure once we have more to share, we will do.
0:37:24 One of the things I’ve been thinking about a lot is that with every modality, you know, maybe first LLMs and then image and video and audio,
0:37:29 There’s like early kind of glimmers of something really exciting in a project or a research preview.
0:37:34 And then there’s like a ton of data and compute and researchers kind of poured out the problem.
0:37:43 And, and you hopefully see this sort of like exponential progress till you eventually get to the point where like you’re out of data or, or the improvements don’t come as easily.
0:37:49 I’m wondering for your thoughts, like where we are on sort of that curve for world models.
0:37:50 That’s a really good question.
0:37:56 I actually have a super hand wavy, somewhat swerving answer, right?
0:37:58 And I think it’s actually both.
0:38:03 So I think the current capabilities are actually already quite compelling.
0:38:14 And so you could make the case that like, if what you wanted was a minutes of auto realistic, any world generation with memory, that could actually be the end goal, right?
0:38:17 And two or three years ago, I probably would have said that was a five year goal.
0:38:29 And so at that point, if you just wanted to improve that, I think you probably end up with this, maybe like, I think the jump from Genie 2 to Genie 3 was, was absolutely massive.
0:38:37 And went from being like, kind of a cool bit of research that was like, showing signs of life, something that could already be very compelling.
0:38:40 But I think there’s a lot more that you can do with this.
0:38:42 And Sholemi kind of referenced this to himself, right?
0:38:45 Like, it’s not the case that you’re dropping yourself in the world, right?
0:38:48 And like, it’s like the real being in the real world, for example.
0:38:50 It’s actually quite different to that.
0:38:55 When you do, you know, take a minute to look away from view to screen, it’s quite a bit richer out there.
0:38:58 And that’s just for the real world.
0:39:01 We also want this ability to generate completely new things, right?
0:39:06 So I think we’ve got a huge gap to close, right?
0:39:09 With the new capabilities that we want to add.
0:39:12 But I think it’s maybe a bit different to language models.
0:39:14 Or actually, maybe it is similar to language models.
0:39:19 But with language models, there’s been like lots of new steps that have actually come on top, right?
0:39:21 That maybe we didn’t think were possible.
0:39:22 We thought things were plateauing.
0:39:26 And then a new idea came that made a significant change.
0:39:29 And that has happened a couple of times in the past few years.
0:39:32 So I think that there’s a few more of those left, for sure.
0:39:36 My final question for you guys is, are we living in a simulation?
0:39:38 Oh, yeah.
0:39:40 That’s every angle just to finish.
0:39:45 My thinking about that is, actually, yeah, I thought about it a bit.
0:39:53 I think that if we live in a simulation, my take is that it doesn’t run on our current hardware.
0:39:58 So it’s analog and not like, you know, it’s continuous.
0:40:08 All of the observations are continuous and there is nothing like, but maybe the quantum level is, you know, some limitation of our, you wanted to go philosophical.
0:40:14 It’s some kind of like a hardware limitation of the simulation we run on.
0:40:17 So, yeah, take it or leave it.
0:40:19 That’s a great answer.
0:40:21 Clearly, it’s a lot of work for the TPU team to do.
0:40:28 Yeah, maybe quantum computing will actually be what’s running our simulation.
0:40:29 So, yeah.
0:40:30 That’s a great place to wrap.
0:40:32 Shlomi, Jack, thank you so much for coming on the podcast.
0:40:33 Thank you, guys.
0:40:34 Thanks, guys.
0:40:34 Thanks for having us.
0:40:39 Thanks for listening to the A16Z podcast.
0:40:45 If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com slash A16Z.
0:40:48 We’ve got more great conversations coming your way.
0:40:49 See you next time.
0:41:05 As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund.
0:41:10 Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
0:41:17 For more details, including a link to our investments, please see A16Z.com forward slash disclosures.
Genie 3 can generate fully interactive, persistent worlds from just text, in real time.
In this episode, Google DeepMind’s Jack Parker-Holder (Research Scientist) and Shlomi Fruchter (Research Director) join Anjney Midha, Marco Mascorro, and Justine Moore of a16z, with host Erik Torenberg, to discuss how they built it, the breakthrough “special memory” feature, and the future of AI-powered gaming, robotics, and world models.
They share:
- How Genie 3 generates interactive environments in real time
- Why its “special memory” feature is such a breakthrough
- The evolution of generative models and emergent behaviors
- Instruction following, text adherence, and model comparisons
- Potential applications in gaming, robotics, simulation, and more
- What’s next: Genie 4, Genie 5, and the future of world models
This conversation offers a first-hand look at one of the most advanced world models ever created.
Timecodes:
0:00 Introduction & The Magic of Genie 3
0:41 Real-Time World Generation Breakthroughs
1:22 The Team’s Journey: From Genie 1 to Genie 3
5:03 Interactive Applications & Use Cases
8:03 Special Memory and World Consistency
12:29 Emergent Behaviors and Model Surprises
18:37 Instruction Following and Text Adherence
19:53 Comparing Genie 3 and Other Models
21:25 The Future of World Models & Modality Convergence
27:35 Downstream Applications and Open Questions
31:42 Robotics, Simulation, and Real-World Impact
39:33 Closing Thoughts & Philosophical Reflections
Resources:
Find Shlomi on X: https://x.com/shlomifruchter
Find Jack on X: https://x.com/jparkerholder
Find Anjney on X: https://x.com/anjneymidha
Find Justine on X: https://x.com/venturetwins
Find Marco on X: https://x.com/Mascobot
Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.